📍 Roadmap: Phase 1 (Foundations + Classical ML)
Week R – Recap & Consolidation
Goal: Cement all basics you’ve already touched so you don’t carry gaps forward.
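A minimal sketch of what this week’s end-to-end pipeline could look like; a tiny hard-coded DataFrame with illustrative column names stands in for the real Titanic CSV:

```python
# Week-R sketch: preprocessing + Logistic Regression baseline vs. Random Forest.
# The toy DataFrame below stands in for the real Titanic data.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "age": [22, 38, 26, 35, None, 54, 2, 27, 14, 58] * 5,
    "fare": [7.2, 71.3, 7.9, 53.1, 8.1, 51.9, 21.1, 11.1, 30.1, 26.6] * 5,
    "sex": ["m", "f", "f", "f", "m", "m", "m", "f", "f", "m"] * 5,
    "survived": [0, 1, 1, 1, 0, 0, 0, 1, 1, 0] * 5,
})
X, y = df.drop(columns="survived"), df["survived"]

# Numeric columns: impute missing values, then scale; categorical: one-hot.
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer()), ("scale", StandardScaler())]),
     ["age", "fare"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["sex"]),
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
scores = {}
for name, model in [("logreg", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier(random_state=0))]:
    clf = Pipeline([("prep", preprocess), ("model", model)])
    clf.fit(X_tr, y_tr)
    scores[name] = clf.score(X_te, y_te)
print(scores)
```

Wrapping preprocessing and the model in one `Pipeline` means the same transforms are applied at fit and predict time, which is the habit worth cementing this week.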
- Cleaning, EDA, preprocessing
- Build: end-to-end ML pipeline on the Titanic dataset (data cleaning + preprocessing, a Logistic Regression baseline, a Random Forest as second model)
- Share: 1-page reflection on what’s strong vs. what needs practice
- Output: notebook + README in GitHub (recap_project/)

Week 6 – Decision Trees & Random Forests
Goal: Learn trees + bagging ensembles.
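The overfitting-vs-depth check could look like this minimal sketch; a synthetic dataset stands in for Titanic:

```python
# Sketch: train/test accuracy of a Decision Tree at several depths.
# A widening train/test gap as depth grows is the signature of overfitting.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for depth in (2, 5, None):  # None = grow until leaves are pure
    tree = DecisionTreeClassifier(max_depth=depth, criterion="gini",
                                  random_state=0).fit(X_tr, y_tr)
    results[depth] = (tree.score(X_tr, y_tr), tree.score(X_te, y_te))

for depth, (train_acc, test_acc) in results.items():
    print(f"depth={depth}: train={train_acc:.2f} test={test_acc:.2f}")
```

Swapping `criterion="gini"` for `"entropy"` is a one-line change, so this same loop doubles as the Gini-vs-entropy comparison.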
- Decision Trees (splits, Gini vs. entropy)
- Random Forests (bagging, feature importance)
- Compare a Decision Tree vs. Logistic Regression on Titanic
- Visualize a small tree (plot_tree)
- Check overfitting with different depths
- Share: short blog/LinkedIn post: “When should you use trees vs. linear models?”
- Extra (optional): try XGBoost
- Output: trees_vs_logreg/ project + visualization image + comparison table

Week 7 – Gradient Boosting (XGBoost, LightGBM, CatBoost)
Goal: Learn boosting ensembles, now standard in applied ML.
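A sketch of the boosting-vs-forest comparison plus a small grid search. To keep the snippet dependency-free, sklearn’s `GradientBoostingClassifier` stands in for XGBoost; the XGBoost estimator exposes the same two knobs (`max_depth`, `learning_rate`) with a very similar sklearn-style API:

```python
# Sketch: gradient boosting vs. random forest, plus a tiny grid search over
# the two hyperparameters highlighted this week.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_classification(n_samples=400, n_features=12, random_state=1)

rf_score = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=3).mean()

search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    {"max_depth": [2, 3], "learning_rate": [0.05, 0.1]},
    cv=3,
)
search.fit(X, y)
print(f"random forest CV: {rf_score:.3f}")
print(f"boosting best CV: {search.best_score_:.3f} params={search.best_params_}")
```

`RandomizedSearchCV` is the drop-in alternative when the grid gets too large to search exhaustively.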
- XGBoost basics (tree depth, learning rate)
- Why boosting often wins Kaggle competitions
- Train XGBoost on the Titanic or House Prices dataset
- Compare with Random Forest
- Tune hyperparameters (GridSearchCV or RandomizedSearchCV)
- Share: small chart showing model performance improvements
- Output: boosting_basics/ repo with notebook + README

Week 8 – Support Vector Machines (SVM)
Goal: Learn margin-based models + kernel tricks.
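A minimal sketch of the 2D experiment; the actual boundary plot is left to matplotlib, but the fitted model already exposes the support vectors that define the margin:

```python
# Sketch: fit sklearn.svm.SVC on a toy 2D dataset and compare with
# Logistic Regression; svm.support_vectors_ holds the margin-defining points.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_blobs(n_samples=120, centers=2, random_state=0)  # 2D by default

svm = SVC(kernel="rbf", C=1.0).fit(X, y)
logreg = LogisticRegression().fit(X, y)

print("SVM accuracy:   ", svm.score(X, y))
print("LogReg accuracy:", logreg.score(X, y))
print("support vectors:", len(svm.support_vectors_))
```

Switching `kernel="rbf"` to `"linear"` makes the boundary a straight line, which is the clearest way to see what the kernel trick buys you.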
- SVM concepts (margin, support vectors)
- Use sklearn.svm.SVC on a 2D dataset (visualize decision boundaries)
- Apply SVM to Titanic or MNIST digits
- Share: side-by-side plot of Logistic Regression vs. SVM on the same dataset
- Output: svm_intro/ repo with boundary plots + comparison table

Week 9 – Model Evaluation & Cross-Validation
Goal: Learn robust evaluation beyond train/test split.
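Both halves of this week fit in one short sketch: per-fold scores from cross-validation, then precision/recall/F1 on an imbalanced synthetic dataset, where accuracy alone is misleading:

```python
# Sketch: cross-validation across folds, plus "accuracy isn't enough":
# on a 90/10 imbalanced dataset, a model can score high accuracy while
# missing most of the minority class.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=500, weights=[0.9], flip_y=0.05,
                           random_state=0)

fold_scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("per-fold accuracy:", [round(s, 3) for s in fold_scores])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)
prec, rec, f1, _ = precision_recall_fscore_support(y_te, pred, average="binary")
print(f"precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
```

The spread of the five fold scores is itself the lesson: a single train/test split would have shown you only one of those numbers.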
- Compare cross-validation results for Logistic Regression, Random Forest, and XGBoost
- Show how evaluation differs across folds
- Share: notebook snippet: “Why accuracy isn’t enough: a precision/recall/F1 example”
- Output: evaluation_methods/ repo with notebook + README

Week 10 – Feature Engineering & Selection
Goal: Learn how to make models smarter with better features.
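The before/after comparison and permutation importance could look like this sketch; the features are synthetic stand-ins (a “fare per person” ratio playing the role of a classic engineered Titanic feature):

```python
# Sketch: baseline vs. engineered feature set, scored with CV, then
# permutation importance on the engineered model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
fare = rng.exponential(30, n)
family_size = rng.integers(1, 6, n)            # e.g., sibsp + parch + 1
y = (fare / family_size > 15).astype(int)      # toy target for illustration

baseline = fare.reshape(-1, 1)
engineered = np.column_stack([fare, fare / family_size])  # + fare per person

scores = {}
for name, X in [("baseline", baseline), ("engineered", engineered)]:
    scores[name] = cross_val_score(
        RandomForestClassifier(random_state=0), X, y, cv=3).mean()
    print(f"{name}: {scores[name]:.3f}")

model = RandomForestClassifier(random_state=0).fit(engineered, y)
imp = permutation_importance(model, engineered, y, random_state=0)
print("permutation importances:", imp.importances_mean.round(3))
```

Permutation importance shuffles one column at a time and measures the score drop, so it works for any fitted model, unlike tree-specific importances.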
- Encoding categorical variables (one-hot, target encoding)
- Feature importance (permutation, SHAP)
- Try feature engineering on the Titanic dataset
- Compare baseline vs. engineered features
- Share: table showing the improvement after adding engineered features
- Output: feature_engineering/ repo with before/after performance

Week 11 – Mini Capstone (Classical ML)
Goal: Consolidate classical ML into a showcase project.
- Pick a fresh dataset from Kaggle (not Titanic/House Prices)
- Apply the full pipeline: clean → engineer features → train multiple models (LR, RF, XGB, SVM)
- Use cross-validation + report metrics
- Share: Medium-style blog: “My first end-to-end ML pipeline”
- Output: classical_ml_capstone/ repo with complete project + write-up

📍 Roadmap: Phase 2 (Deep Learning Fundamentals)
Week 12 – Neural Network Basics (MLP)
Goal: Understand how neural nets work at the most basic level.
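As a dependency-light preview, sklearn’s `MLPClassifier` can stand in for the PyTorch/TensorFlow model, and the 8x8 digits dataset for full MNIST; the shape of the experiment (MLP vs. Logistic Regression, inspect first-layer weights) is the same:

```python
# Sketch: small MLP vs. Logistic Regression baseline on sklearn's digits.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)            # 64 pixels per 8x8 image
X_tr, X_te, y_tr, y_te = train_test_split(X / 16.0, y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                    max_iter=300, random_state=0).fit(X_tr, y_tr)
logreg = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("MLP test accuracy:   ", round(mlp.score(X_te, y_te), 3))
print("LogReg test accuracy:", round(logreg.score(X_te, y_te), 3))

# mlp.coefs_[0] has shape (64 pixels, 64 hidden units): each column is one
# hidden unit's weights, reshapeable to 8x8 for the weight visualization.
print("first-layer weights:", mlp.coefs_[0].shape)
```

Reshaping each column of `mlp.coefs_[0]` to 8x8 and rendering it with `matplotlib.pyplot.imshow` gives the hidden-layer visualization this week asks for.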
- Perceptron → Multi-Layer Perceptron (MLP)
- Activation functions (ReLU, Sigmoid, Tanh)
- Backpropagation & gradient descent (conceptual)
- Train a small MLP with PyTorch/TensorFlow on MNIST digits
- Compare with a Logistic Regression baseline
- Share: visualize the learned weights of the first hidden layer
- Output: mlp_basics/ repo with notebook + README

Week 13 – Training Deep Networks
Goal: Learn how to train deeper nets effectively.
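The two update rules fit in a few lines of NumPy; minimizing a simple quadratic makes the mechanics visible without any framework:

```python
# Sketch: SGD vs. Adam update rules in plain NumPy on f(w) = sum(w**2).
import numpy as np

def grad(w):                      # gradient of f(w) = sum(w**2)
    return 2 * w

def sgd(w, steps=100, lr=0.1):
    for _ in range(steps):
        w = w - lr * grad(w)      # step straight down the gradient
    return w

def adam(w, steps=100, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = np.zeros_like(w)          # running mean of gradients
    v = np.zeros_like(w)          # running mean of squared gradients
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g ** 2
        m_hat = m / (1 - b1 ** t) # bias correction for the zero init
        v_hat = v / (1 - b2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

w0 = np.array([3.0, -2.0])
sgd_final = sgd(w0)
adam_final = adam(w0)
print("SGD final: ", sgd_final)
print("Adam final:", adam_final)
```

The per-coordinate division by `sqrt(v_hat)` is what makes Adam’s effective step size roughly scale-free, which is why it is often the forgiving default while SGD needs a well-chosen learning rate.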
- Optimization algorithms (SGD, Adam)
- Learning rate scheduling
- Vanishing/exploding gradients
- Compare SGD vs. Adam on MNIST
- Show the effect of batch size & learning rate
- Share: small table/plot of training curves with different optimizers
- Output: training_deep_nets/ repo with training comparison notebook

Week 14 – Convolutional Neural Networks (CNNs)
Goal: Learn how CNNs work for vision tasks.
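What a convolution actually computes can be shown in plain NumPy before touching a framework: slide a small filter over the image and take dot products to build the feature map.

```python
# Sketch: a 2D convolution (technically cross-correlation, as in deep
# learning libraries) implemented with explicit loops.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # dot product of the filter with one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0                        # left half dark, right half bright

# A vertical-edge filter: responds where intensity changes left-to-right.
edge_filter = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])

fmap = conv2d(image, edge_filter)
print(fmap)                               # strong response only at the edge
```

A learned CNN filter is exactly this kind of small kernel, except its values are found by gradient descent instead of being hand-written.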
- Convolutions, filters, feature maps
- CNN architecture basics (LeNet, AlexNet)
- Compare with an MLP on the same dataset
- Share: visualization of learned CNN filters
- Output: cnn_basics/ repo with notebook + filter visualization

Week 15 – Regularization & Generalization
Goal: Prevent overfitting in deep nets.
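Dropout itself is tiny; this sketch shows the “inverted dropout” form frameworks use at training time: zero random activations and rescale the survivors so the expected value is unchanged.

```python
# Sketch: inverted dropout on a batch of fake activations.
import numpy as np

def dropout(x, p_drop, rng):
    mask = rng.random(x.shape) >= p_drop   # keep each unit with prob 1 - p_drop
    return x * mask / (1.0 - p_drop)       # rescale so E[output] == x

rng = np.random.default_rng(0)
acts = np.ones((4, 8))                     # stand-in for layer activations
out = dropout(acts, p_drop=0.5, rng=rng)
print(out)
# At inference time dropout is simply switched off: all units are used,
# and thanks to the rescaling no further correction is needed.
```

Because a different random mask is drawn every batch, no single unit can be relied on, which is what pushes the network away from memorizing.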
- Dropout, batch normalization, weight decay
- Data augmentation for images
- Train a CNN with and without dropout + batch norm
- Show the difference in training vs. validation curves
- Share: blog/LinkedIn post: “Why deep nets memorize, and how to stop it”
- Output: regularization_cnn/ repo with experiment notebook

Week 16 – Recurrent Neural Networks (RNNs)
Goal: Learn sequence models.
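The core recurrence is just one equation applied once per timestep; this NumPy sketch (random weights, a fake token sequence) shows how the final hidden state ends up “summarizing” the whole input:

```python
# Sketch: the vanilla RNN recurrence h_t = tanh(Wxh @ x_t + Whh @ h_{t-1} + b).
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 5, 8, 10

Wxh = rng.normal(0, 0.1, (hidden_dim, input_dim))   # input -> hidden
Whh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))  # hidden -> hidden
b = np.zeros(hidden_dim)

xs = rng.normal(size=(seq_len, input_dim))          # fake embedded tokens
h = np.zeros(hidden_dim)
for x in xs:                                        # same weights every step
    h = np.tanh(Wxh @ x + Whh @ h + b)

print("final hidden state:", h.round(3))
```

The repeated multiplication by the same `Whh` is also the intuition behind vanishing/exploding gradients: backprop multiplies by it once per timestep, shrinking or blowing up the signal over long sequences.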
- RNN basics (sequence modeling)
- The vanishing gradient problem in RNNs
- Train an RNN (or LSTM) on the IMDB sentiment dataset
- Compare a bag-of-words model vs. an RNN
- Share: side-by-side performance of traditional ML vs. an RNN for text
- Output: rnn_basics/ repo with notebook + README

Week 17 – Attention Mechanisms
Goal: Understand why attention replaced RNNs in many tasks.
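Scaled dot-product attention, the building block behind all of this, fits in a few lines of NumPy: each query scores every key, softmax turns the scores into weights, and the output is a weighted average of the values.

```python
# Sketch: scaled dot-product attention. The weights matrix is exactly
# the "attention map" this week asks you to visualize.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of each query to each key
    weights = softmax(scores)         # each row sums to 1: where to "look"
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d = 4, 6                     # e.g., 4 tokens, dimension 6
Q, K, V = (rng.normal(size=(seq_len, d)) for _ in range(3))

out, weights = attention(Q, K, V)
print("attention weights:\n", weights.round(2))
print("output shape:", out.shape)
```

Unlike the RNN loop, nothing here is sequential: every token attends to every other token in one matrix multiply, which is the seed of next week’s “why Transformers scale better” argument.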
- The idea of attention (focus on relevant parts of a sequence)
- Encoder-decoder with attention
- From attention to Transformers (conceptual preview)
- Implement a simple attention-based sentiment model
- Compare an LSTM vs. an attention-LSTM
- Share: visualization of attention weights for a sample sentence
- Output: attention_intro/ repo with notebook + attention map

Week 18 – Transformers Intro
Goal: Get comfortable with the backbone of modern AI.
- Transformer architecture (encoder, decoder, self-attention)
- Why Transformers scale better than RNNs
- Train a tiny Transformer on text classification (PyTorch/TensorFlow)
- Compare with an RNN baseline
- Share: write “My first Transformer: how it compares to RNNs”
- Output: transformer_intro/ repo with minimal implementation

📍 Roadmap: Phase 3 (Applied Deep Learning & Transfer Learning)
Week 19 – Transfer Learning Basics
Goal: Learn how to reuse pretrained models effectively.
- The concept of transfer learning (feature reuse)
- Fine-tuning vs. feature extraction
- Popular pretrained models (ResNet, VGG)
- Fine-tune ResNet18 on a custom dataset (e.g., a flowers dataset)
- Share: side-by-side comparison of training from scratch vs. transfer learning
- Output: transfer_learning_basics/ repo with notebook

Week 20 – Hugging Face Transformers (Text)
Goal: Get hands-on with Hugging Face ecosystem.
- Tokenizers, the model hub, pipelines
- How to fine-tune BERT for classification
- Build a sentiment classifier using Hugging Face BERT
- Share: blog: “How I fine-tuned BERT in under an hour”
- Output: huggingface_text/ repo with notebook

Week 21 – Hugging Face Transformers (Vision)
Goal: Use Hugging Face for computer vision.
- Vision Transformers (ViT)
- Hugging Face vision pipelines
- Fine-tune ViT on CIFAR-10
- Share: results comparison: ResNet (last week) vs. ViT
- Output: huggingface_vision/ repo with notebook

Week 22 – Multimodal Basics (Text + Images)
Goal: First step into multimodal AI.
- CLIP: aligning text and images
- Zero-shot classification with CLIP
- Use CLIP to match images with captions
- Try zero-shot image classification
- Share: demo notebook of CLIP matching images to text prompts
- Output: multimodal_clip/ repo

Week 23 – Deployment 101 (Web + API)
Goal: Make models usable outside notebooks.
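The shape of an inference API is the same whatever framework you use: accept JSON, run the model, return JSON. This sketch uses Python’s stdlib `http.server` as a dependency-free stand-in for FastAPI, with a keyword-matching placeholder where the real BERT classifier would go:

```python
# Sketch: a minimal JSON inference endpoint plus a sample request against it.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(text):
    # Placeholder for model inference (e.g., a loaded BERT classifier).
    return {"label": "positive" if "good" in text.lower() else "negative"}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        result = predict(json.loads(body)["text"])
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(result).encode())

    def log_message(self, *args):          # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Test the endpoint with a sample request.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"text": "This movie was good!"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())
print(answer)
server.shutdown()
```

In FastAPI the handler collapses to a decorated function with a pydantic request model, but the request/response contract you design here carries over unchanged.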
- Model saving/loading (PyTorch .pt / Hugging Face .save_pretrained)
- Basics of FastAPI/Flask for ML APIs
- Running inference efficiently
- Wrap the BERT classifier in a FastAPI endpoint
- Test it with sample requests
- Share: short LinkedIn video: “Querying my AI model with an API”
- Output: model_deployment_api/ repo

Week 24 – Deployment 102 (Frontend Integration)
Goal: Leverage your frontend skills + AI models.
- How to call APIs from a React/Next.js frontend
- Hosting basics (Render, Vercel, Hugging Face Spaces)
- Build a simple web app that uses your BERT API for live sentiment analysis
- Share: deploy to Hugging Face Spaces + share the link
- Output: ai_sentiment_webapp/ repo with backend + frontend

Week 25 – End-to-End Mini Project
Goal: Bring together ML + DL + deployment into one small project.
- End-to-end AI workflow (data → model → deployment → UI)
- The importance of documenting ML projects
- Project ideas (pick one): AI Resume Screener (text classification), Fake News Detector (text + Hugging Face), Product Review Analyzer (sentiment + visualization)
- Share: case study write-up with architecture diagram + lessons learned
- Output: mini_capstone/ repo

Week 26 – Review & Expansion
Goal: Consolidate skills + prep for next phase.