
Concepts

Basics

Feedforward

Dynamic Programming is the process in which the next variables are calculated from previously computed variables. This pattern can be seen clearly in the feedforward process of machine learning (ML): each layer's outputs are computed from the outputs of the layer before it, as in the sketch below.
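
A minimal sketch of this pattern in Python with NumPy; the layer sizes and the tanh activation are illustrative choices, not from the text.

    import numpy as np

    def feedforward(x, params):
        # Each layer's output depends only on the previous layer's output,
        # so the values are computed in order: the Dynamic Programming pattern.
        a = x
        for W, b in params:
            a = np.tanh(W @ a + b)   # next variables from previous variables
        return a

    # Two small layers with random params, just to run the pass end to end.
    rng = np.random.default_rng(0)
    params = [(rng.normal(size=(4, 3)), np.zeros(4)),
              (rng.normal(size=(1, 4)), np.zeros(1))]
    print(feedforward(np.array([0.1, 0.2, 0.3]), params))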

Backprop

Dynamic Optimization is Dynamic Programming combined with optimization: it does not just calculate the next variables from formulae but also optimizes them. This process is used in backpropagation to optimize the weights and biases (together called the params).
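
A toy sketch of one such step, assuming a single linear neuron with squared loss; the gradient formulae below are the standard ones for that setup, not from the text.

    import numpy as np

    # One backprop step for a linear neuron y_hat = w.x + b
    # with squared loss L = 0.5 * (y_hat - y)^2.
    def backprop_step(w, b, x, y, lr=0.1):
        y_hat = w @ x + b        # feedforward: compute from previous values
        err = y_hat - y
        grad_w = err * x         # dL/dw
        grad_b = err             # dL/db
        # Dynamic Optimization: the params are not just computed but improved.
        return w - lr * grad_w, b - lr * grad_b

    w, b = np.zeros(2), 0.0
    x, y = np.array([1.0, 2.0]), 3.0
    for _ in range(50):
        w, b = backprop_step(w, b, x, y)
    print(w @ x + b)             # approaches the target 3.0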

Training

The process of running feedforward and backprop over data covering all cases (the training set), usually for many passes (epochs).
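
A sketch of a training loop in the same toy setting; the linear data here is made up for illustration.

    import numpy as np

    # Training = feedforward + backprop repeated over all cases, many epochs.
    X = np.array([0.0, 1.0, 2.0, 3.0])     # made-up data following y = 2x + 1
    Y = np.array([1.0, 3.0, 5.0, 7.0])
    w, b, lr = 0.0, 0.0, 0.05

    for epoch in range(500):               # each epoch visits every case
        for x, y in zip(X, Y):
            err = (w * x + b) - y          # feedforward
            w -= lr * err * x              # backprop: optimize the params
            b -= lr * err
    print(w, b)                            # approaches w = 2, b = 1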

Fine-tuning

The process of re-optimizing an already-trained ML model with new data for specific cases.
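
A sketch continuing the toy linear model: the starting params are assumed to be already trained, and the smaller learning rate is a common convention, not something the text prescribes.

    import numpy as np

    # Fine-tuning: start from params trained already,
    # then re-optimize them on new, case-specific data.
    w, b = 2.0, 1.0                        # pretrained params (fit y = 2x + 1)
    X_new = np.array([4.0, 5.0])           # new cases, slightly shifted rule
    Y_new = np.array([9.5, 11.5])          # roughly y = 2x + 1.5
    lr = 0.01                              # smaller steps are common here

    for epoch in range(300):
        for x, y in zip(X_new, Y_new):
            err = (w * x + b) - y          # feedforward on the new data
            w -= lr * err * x              # same backprop updates as training
            b -= lr * err
    print(w, b)                            # params nudged toward the new cases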

PEFT

Parameter-Efficient Fine-Tuning (PEFT) is fine-tuning in which only a selected subset of the params (weights, biases) is updated, which makes fine-tuning fast.
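
A minimal sketch of the idea, assuming the biases are chosen as the trainable subset; which params to select depends on the PEFT method, and this choice is only for illustration.

    import numpy as np

    # PEFT: feedforward uses all params, but only a chosen subset is updated.
    rng = np.random.default_rng(0)
    W_frozen = rng.normal(size=(4, 3))     # pretrained weights, left untouched
    b = np.zeros(4)                        # the biases are the trainable subset
    x = np.array([0.1, 0.2, 0.3])
    target = np.ones(4)
    lr = 0.1

    for _ in range(100):
        err = (W_frozen @ x + b) - target  # feedforward: frozen + trainable
        b -= lr * err                      # update only the selected params
    print(np.round(W_frozen @ x + b, 3))   # fits the target through b alone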

LoRA

Low-Rank Adaptation (LoRA) is a PEFT method that expresses the update to a large weight matrix as a product of lower-rank matrices (fewer dimensions), so far fewer params have to be updated.
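
A minimal sketch, assuming a square weight matrix, the common zero init for B, and illustrative sizes and learning rate; only the low-rank factors train while the big matrix stays frozen.

    import numpy as np

    # LoRA: the pretrained matrix W stays frozen; only the low-rank factors
    # B (d x r) and A (r x d) are trained, so 2*d*r params are updated
    # instead of d*d. The adapted weights are W + B @ A.
    rng = np.random.default_rng(0)
    d, r = 6, 2                            # r << d is the low rank
    W = rng.normal(size=(d, d))            # frozen pretrained weights
    B = np.zeros((d, r))                   # trainable; zero init makes BA = 0
    A = rng.normal(size=(r, d)) * 0.01     # trainable; small random init
    x = rng.normal(size=d)
    target = rng.normal(size=d)
    lr = 0.02

    for _ in range(300):
        err = (W + B @ A) @ x - target     # feedforward with adapted weights
        grad_B = np.outer(err, A @ x)      # dL/dB = err (Ax)^T
        grad_A = np.outer(B.T @ err, x)    # dL/dA = (B^T err) x^T
        B -= lr * grad_B
        A -= lr * grad_A
    print(np.linalg.norm((W + B @ A) @ x - target))  # shrinks toward 0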

Others

Terms

 