
📍 Roadmap: Phase 1 (Foundations + Classical ML)

Week R – Recap & Consolidation

Goal: Cement all the basics you’ve already touched so you don’t carry gaps forward.
Learn/Revise:
- Python/pandas basics.
- Cleaning, EDA, preprocessing.
Build: An end-to-end ML pipeline on the Titanic dataset:
- Data cleaning + preprocessing.
- Logistic Regression baseline.
- Random Forest as a second model.
- Save the model with joblib.
Share: A 1-page reflection on what’s strong vs what needs practice.
Output: Notebook + README in GitHub (recap_project/).
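A minimal sketch of the recap build, using a tiny synthetic DataFrame in place of the real Titanic CSV (the columns `age`, `fare`, `sex`, `survived` and all numbers are illustrative stand-ins):

```python
# End-to-end sketch: clean -> preprocess -> baseline vs second model -> save.
import numpy as np
import pandas as pd
import joblib
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "age": rng.normal(30, 10, n),
    "fare": rng.exponential(30, n),
    "sex": rng.choice(["male", "female"], n),
})
df.loc[rng.choice(n, 30, replace=False), "age"] = np.nan  # simulate missing ages
df["survived"] = ((df["sex"] == "female") | (df["fare"] > 50)).astype(int)

numeric, categorical = ["age", "fare"], ["sex"]
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer()), ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

X_train, X_test, y_train, y_test = train_test_split(
    df[numeric + categorical], df["survived"], test_size=0.25, random_state=0)

for name, clf in [("logreg", LogisticRegression(max_iter=1000)),
                  ("rf", RandomForestClassifier(n_estimators=100, random_state=0))]:
    model = Pipeline([("prep", preprocess), ("clf", clf)])
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 3))

joblib.dump(model, "recap_model.joblib")  # persist the last fitted pipeline
reloaded = joblib.load("recap_model.joblib")
```

Keeping the imputer, scaler, and encoder inside one Pipeline means the saved artifact carries its preprocessing with it, which is the main point of the exercise.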

Week 6 – Decision Trees & Random Forests

Goal: Learn trees + bagging ensembles.
Learn:
- Decision Trees (splits, Gini vs entropy).
- Random Forests (bagging, feature importance).
Build:
- Compare a Decision Tree vs Logistic Regression on Titanic.
- Visualize a small tree (plot_tree).
- Check overfitting with different depths.
Share: Short blog/LinkedIn post: “When should you use trees vs linear models?”
Extra (optional): Try XGBoost.
Output: trees_vs_logreg/ project + visualization image + comparison table.
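A sketch of the depth-vs-overfitting experiment, with `make_classification` standing in for Titanic (the dataset and numbers are illustrative; swap in your own data and add `plot_tree` for the visualization step):

```python
# How tree depth drives overfitting: train accuracy climbs to 1.0 as the
# tree grows unrestricted, while test accuracy lags behind.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("logreg test acc:", round(baseline.score(X_te, y_te), 3))

results = {}
for depth in (2, 4, 8, None):  # None = grow until leaves are pure
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    results[depth] = (tree.score(X_tr, y_tr), tree.score(X_te, y_te))
    print(f"depth={depth}: train={results[depth][0]:.3f} test={results[depth][1]:.3f}")
```

The train/test gap at each depth is exactly the comparison-table material the week's output asks for.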

Week 7 – Gradient Boosting (XGBoost, LightGBM, CatBoost)

Goal: Learn boosting ensembles, now standard in applied ML.
Learn:
- Boosting vs bagging.
- XGBoost basics (tree depth, learning rate).
- Why boosting often wins Kaggle competitions.
Build:
- Train XGBoost on the Titanic or House Prices dataset.
- Compare with Random Forest.
- Tune hyperparameters (GridSearchCV or RandomizedSearchCV).
Share: Small chart showing model performance improvements.
Output: boosting_basics/ repo with notebook + README.

Week 8 – Support Vector Machines (SVM)

Goal: Learn margin-based models + the kernel trick.
Learn:
- SVM concepts (margin, support vectors).
- Linear vs RBF kernels.
Build:
- Use sklearn.svm.SVC on a 2D dataset (visualize decision boundaries).
- Apply SVM to Titanic or MNIST digits.
Share: Side-by-side plot: Logistic Regression vs SVM on the same dataset.
Output: svm_intro/ repo with boundary plots + comparison table.
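A sketch of the kernel comparison on a 2D dataset where the true boundary is curved (`make_moons`), so the kernel trick's payoff shows up in the scores even before you plot the boundaries:

```python
# Linear vs RBF kernel: the RBF kernel can bend the decision boundary
# around the interleaving moons; the linear one cannot.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)
    scores[kernel] = clf.score(X_te, y_te)
    print(f"{kernel}: acc={scores[kernel]:.3f}, support vectors per class={clf.n_support_}")
```

For the boundary plots themselves, evaluate each fitted `clf` over a mesh grid and use `matplotlib.pyplot.contourf`.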

Week 9 – Model Evaluation & Cross-Validation

Goal: Learn robust evaluation beyond a single train/test split.
Learn:
- k-fold cross-validation.
- Stratified sampling.
- The bias-variance tradeoff.
Build:
- Compare CV results for Logistic Regression, Random Forest, and XGBoost.
- Show how scores vary across folds.
Share: Notebook snippet: “Why accuracy isn’t enough: a precision/recall/F1 example.”
Output: evaluation_methods/ repo with notebook + README.
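A sketch of the fold-by-fold comparison on a synthetic imbalanced dataset. The per-fold scores are the point: a single train/test split hides the variance they reveal.

```python
# Stratified k-fold CV: each fold keeps the 70/30 class ratio, and the
# spread of fold scores shows how stable each model's estimate is.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, n_features=10, weights=[0.7, 0.3], random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"{name}: folds={scores.round(3)} mean={scores.mean():.3f} std={scores.std():.3f}")
```

Swapping `scoring="f1"` into `cross_val_score` gives the precision/recall angle the Share item asks about.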

Week 10 – Feature Engineering & Selection

Goal: Learn how to make models smarter with better features.
Learn:
- Encoding categorical variables (one-hot, target encoding).
- Scaling + normalization.
- Feature importance (permutation, SHAP).
Build:
- Try feature engineering on the Titanic dataset.
- Compare baseline vs engineered features.
Share: Table showing the improvement after adding engineered features.
Output: feature_engineering/ repo with before/after performance.
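A sketch of the before/after experiment plus permutation importance. The columns (`sibsp`, `parch`, `fare`) and the `family_size` feature mirror the classic Titanic engineering move, but the data here is synthetic, so the size of the improvement is illustrative only:

```python
# Baseline features vs baseline + one engineered feature, then
# permutation importance to see what the model actually leans on.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "sibsp": rng.integers(0, 4, n),
    "parch": rng.integers(0, 3, n),
    "fare": rng.exponential(30, n),
})
family = df["sibsp"] + df["parch"]
y = ((family == 0) & (df["fare"] > 20)).astype(int)  # target uses family size

def fit_score(X):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    return model, X_te, y_te, model.score(X_te, y_te)

_, _, _, base = fit_score(df)
model, X_te, y_te, eng = fit_score(df.assign(family_size=family))
print(f"baseline={base:.3f}  engineered={eng:.3f}")

imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(X_te.columns, imp.importances_mean):
    print(f"{name}: {score:.3f}")
```

Printing the two scores side by side is the before/after table in miniature; on real Titanic data the gain from `family_size` is typically small but consistent.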

Week 11 – Mini Capstone (Classical ML)

Goal: Consolidate classical ML into a showcase project.
Build:
- Pick a fresh dataset from Kaggle (not Titanic/House Prices).
- Apply the full pipeline: clean → engineer features → train multiple models (LR, RF, XGB, SVM).
- Use cross-validation + report metrics.
Share: Medium-style blog: “My first end-to-end ML pipeline.”
Output: classical_ml_capstone/ repo with complete project + write-up.

📍 Roadmap: Phase 2 (Deep Learning Fundamentals)

Week 12 – Neural Network Basics (MLP)

Goal: Understand how neural nets work at the most basic level.
Learn:
- Perceptron → Multi-Layer Perceptron (MLP).
- Activation functions (ReLU, sigmoid, tanh).
- Backpropagation & gradient descent (conceptual).
Build:
- Train a small MLP with PyTorch/TensorFlow on MNIST digits.
- Compare with a Logistic Regression baseline.
Share: Visualize the learned weights of the first hidden layer.
Output: mlp_basics/ repo with notebook + README.
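Before reaching for PyTorch/TensorFlow, a from-scratch NumPy sketch makes backprop concrete: one hidden layer learns XOR, which no linear model (including Logistic Regression) can fit. The layer sizes and learning rate are arbitrary choices for this toy problem:

```python
# Tiny MLP trained by hand-written backprop on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer, 8 units
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (sigmoid + binary cross-entropy gives error term out - y)
    d_out = out - y
    d_W2 = h.T @ d_out;            d_b2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h**2)  # tanh derivative
    d_W1 = X.T @ d_h;              d_b1 = d_h.sum(0)
    # gradient descent step (gradients averaged over the batch)
    W2 -= lr * d_W2 / len(X); b2 -= lr * d_b2 / len(X)
    W1 -= lr * d_W1 / len(X); b1 -= lr * d_b1 / len(X)

preds = (out > 0.5).astype(int)
print("predictions:", preds.ravel(), "targets:", y.ravel().astype(int))
```

The MNIST version of this week's build is the same loop with bigger matrices; frameworks just generate the backward pass for you.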

Week 13 – Training Deep Networks

Goal: Learn how to train deeper nets effectively.
Learn:
- Optimization algorithms (SGD, Adam).
- Learning-rate scheduling.
- Vanishing/exploding gradients.
Build:
- Compare SGD vs Adam on MNIST.
- Show the effect of batch size & learning rate.
Share: Small table/plot of training curves with different optimizers.
Output: training_deep_nets/ repo with training comparison notebook.
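A NumPy sketch of why the optimizer choice matters, on an ill-conditioned quadratic f(w) = ½(100·w₁² + w₂²): one global learning rate must stay small for the steep direction, while Adam rescales each parameter's step individually. The learning rates and step counts are illustrative:

```python
# Plain gradient descent vs Adam on a badly scaled quadratic.
import numpy as np

grad = lambda w: np.array([100.0 * w[0], 1.0 * w[1]])  # gradient of f

def run_sgd(lr=0.015, steps=200):
    w = np.array([1.0, 1.0])
    for _ in range(steps):
        w = w - lr * grad(w)   # lr must stay < 2/100 or w1 diverges
    return w

def run_adam(lr=0.1, steps=200, b1=0.9, b2=0.999, eps=1e-8):
    w = np.array([1.0, 1.0])
    m, v = np.zeros(2), np.zeros(2)
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g          # first-moment estimate
        v = b2 * v + (1 - b2) * g**2       # second-moment estimate
        m_hat = m / (1 - b1**t)            # bias correction
        v_hat = v / (1 - b2**t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter step
    return w

print("sgd  final w:", run_sgd())
print("adam final w:", run_adam())
```

The same conditioning problem appears in deep nets, which is why the SGD-vs-Adam comparison on MNIST is worth running for real.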

Week 14 – Convolutional Neural Networks (CNNs)

Goal: Learn how CNNs work for vision tasks.
Learn:
- Convolutions, filters, feature maps.
- Pooling layers.
- CNN architecture basics (LeNet, AlexNet).
Build:
- Train a CNN on CIFAR-10.
- Compare with an MLP on the same dataset.
Share: Visualization of learned CNN filters.
Output: cnn_basics/ repo with notebook + filter visualization.
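A NumPy sketch of the core operation before touching a framework: slide a 3×3 edge filter over a toy image and check the feature-map size formula out = (in − kernel) // stride + 1. The image and filter are hand-made for illustration:

```python
# Convolution by hand: a vertical-edge filter responds where the image
# changes from dark to bright.
import numpy as np

def conv2d(img, kernel, stride=1):
    kh, kw = kernel.shape
    oh = (img.shape[0] - kh) // stride + 1   # output height
    ow = (img.shape[1] - kw) // stride + 1   # output width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = img[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = (patch * kernel).sum()  # elementwise multiply + sum
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0  # left half dark, right half bright
vertical_edge = np.array([[-1, 0, 1],
                          [-1, 0, 1],
                          [-1, 0, 1]], dtype=float)

fmap = conv2d(img, vertical_edge)
print("feature map shape:", fmap.shape)
print("strongest response at column:", fmap.argmax() % fmap.shape[1])
```

In a trained CNN the filters are learned rather than hand-set, which is exactly what the filter-visualization task makes visible.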

Week 15 – Regularization & Generalization

Goal: Prevent overfitting in deep nets.
Learn:
- Dropout, batch normalization, weight decay.
- Data augmentation for images.
Build:
- Train a CNN with and without dropout + batch norm.
- Show the difference in training vs validation curves.
Share: Blog/LinkedIn post: “Why deep nets memorize, and how to stop it.”
Output: regularization_cnn/ repo with experiment notebook.

Week 16 – Recurrent Neural Networks (RNNs)

Goal: Learn sequence models.
Learn:
- RNN basics (sequence modeling).
- LSTM and GRU concepts.
- The vanishing-gradient problem in RNNs.
Build:
- Train an RNN (or LSTM) on the IMDB sentiment dataset.
- Compare a bag-of-words model vs the RNN.
Share: Side-by-side performance of traditional ML vs an RNN for text.
Output: rnn_basics/ repo with notebook + README.
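A NumPy sketch of the recurrence itself, since the whole RNN idea fits in one line: h_t = tanh(W_x·x_t + W_h·h_{t−1} + b), with the same weights reused at every step. Sizes and the random input sequence are arbitrary:

```python
# One forward pass of a vanilla RNN over a 5-step toy sequence.
import numpy as np

rng = np.random.default_rng(0)
hidden, feat, steps = 4, 3, 5
W_x = rng.normal(0, 0.5, (feat, hidden))
W_h = rng.normal(0, 0.5, (hidden, hidden))
b = np.zeros(hidden)

xs = rng.normal(0, 1, (steps, feat))  # one toy input sequence
h = np.zeros(hidden)
history = []
for x_t in xs:  # same W_x, W_h at every step: that's the recurrence
    h = np.tanh(x_t @ W_x + h @ W_h + b)
    history.append(h.copy())

print("final hidden state:", np.round(h, 3))
# Repeated multiplication by W_h is also why gradients vanish or explode:
# backprop through time raises its eigenvalues to the sequence length.
print("spectral radius of W_h:", round(max(abs(np.linalg.eigvals(W_h))), 3))
```

LSTMs and GRUs replace this single tanh update with gated updates precisely to tame that repeated multiplication.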

Week 17 – Attention Mechanisms

Goal: Understand why attention replaced RNNs in many tasks.
Learn:
- The idea of attention (focus on relevant parts of a sequence).
- Encoder-decoder with attention.
- From attention to Transformers (conceptual preview).
Build:
- Implement a simple attention-based sentiment model.
- Compare an LSTM vs an attention-augmented LSTM.
Share: Visualization of attention weights for a sample sentence.
Output: attention_intro/ repo with notebook + attention map.
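A NumPy sketch of scaled dot-product attention, the mechanism behind both this week's model and next week's Transformer: weights = softmax(Q·Kᵀ/√d), output = weights·V. The Q/K/V matrices here are random placeholders; in a real model they come from learned projections of the input:

```python
# Scaled dot-product attention: each output row is a weighted average of V,
# with weights forming a probability distribution over sequence positions.
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # similarity of queries to keys
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d = 4, 8
Q = rng.normal(size=(seq_len, d))
K = rng.normal(size=(seq_len, d))
V = rng.normal(size=(seq_len, d))

out, weights = attention(Q, K, V)
print("attention weights (rows sum to 1):\n", np.round(weights, 3))
print("output shape:", out.shape)
```

The `weights` matrix is exactly what the attention-map visualization in the Share item plots, one row per query token.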

Week 18 – Transformers Intro

Goal: Get comfortable with the backbone of modern AI.
Learn:
- The Transformer architecture (encoder, decoder, self-attention).
- Positional encoding.
- Why Transformers scale better than RNNs.
Build:
- Train a tiny Transformer on text classification (PyTorch/TensorFlow).
- Compare with the RNN baseline.
Share: Write “My first Transformer: how it compares to RNNs.”
Output: transformer_intro/ repo with minimal implementation.

📍 Roadmap: Phase 3 (Applied Deep Learning & Transfer Learning)

Week 19 – Transfer Learning Basics

Goal: Learn how to reuse pretrained models effectively.
Learn:
- The concept of transfer learning (feature reuse).
- Fine-tuning vs feature extraction.
- Popular pretrained models (ResNet, VGG).
Build:
- Fine-tune ResNet18 on a custom dataset (e.g., a flowers dataset).
Share: Side-by-side comparison: training from scratch vs transfer learning.
Output: transfer_learning_basics/ repo with notebook.

Week 20 – Hugging Face Transformers (Text)

Goal: Get hands-on with the Hugging Face ecosystem.
Learn:
- Tokenizers, the model hub, pipelines.
- How to fine-tune BERT for classification.
Build:
- A sentiment classifier using a Hugging Face BERT model.
Share: Blog: “How I fine-tuned BERT in under an hour.”
Output: huggingface_text/ repo with notebook.

Week 21 – Hugging Face Transformers (Vision)

Goal: Use Hugging Face for computer vision.
Learn:
- Vision Transformers (ViT).
- Hugging Face vision pipelines.
Build:
- Fine-tune ViT on CIFAR-10.
Share: Results comparison: ResNet (last week) vs ViT.
Output: huggingface_vision/ repo with notebook.

Week 22 – Multimodal Basics (Text + Images)

Goal: First step into multimodal AI.
Learn:
What multimodal means.
CLIP: aligning text and images.
Zero-shot classification with CLIP.
Build:
Use CLIP to match images with captions.
Try zero-shot image classification.
Share: Demo notebook of CLIP matching images to text prompts.
Output: multimodal_clip/ repo.

Week 23 – Deployment 101 (Web + API)

Goal: Make models usable outside notebooks.
Learn:
- Model saving/loading (PyTorch .pt files / Hugging Face save_pretrained).
- Basics of FastAPI/Flask for ML APIs.
- Running inference efficiently.
Build:
- Wrap the BERT classifier in a FastAPI endpoint.
- Test it with sample requests.
Share: Short LinkedIn video: “Querying my AI model with an API.”
Output: model_deployment_api/ repo.

Week 24 – Deployment 102 (Frontend Integration)

Goal: Leverage your frontend skills alongside AI models.
Learn:
- How to call APIs from a React/Next.js frontend.
- Hosting basics (Render, Vercel, Hugging Face Spaces).
Build:
- A simple web app that uses your BERT API for live sentiment analysis.
Share: Deploy to Hugging Face Spaces + share the link.
Output: ai_sentiment_webapp/ repo with backend + frontend.

Week 25 – End-to-End Mini Project

Goal: Bring ML + DL + deployment together into one small project.
Learn:
- The end-to-end AI workflow (data → model → deployment → UI).
- The importance of documenting ML projects.
Build (example project ideas):
- AI Resume Screener (text classification).
- Fake News Detector (text + Hugging Face).
- Product Review Analyzer (sentiment + visualization).
Share: Case-study write-up with architecture diagram + lessons learned.
Output: mini_capstone/ repo.

Week 26 – Review & Expansion

Goal: Consolidate skills + prep for the next phase.