Concepts

Basics

Feedforward

Dynamic Programming is the process in which the next variables are computed from previously computed variables. This pattern can be seen clearly in the feedforward pass of a machine learning (ML) model: each layer's outputs are computed from the outputs of the layer before it.
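
A minimal sketch of a feedforward pass in NumPy (illustrative only; the layer sizes and tanh activation are assumptions, not from this doc):

```python
import numpy as np

def feedforward(x, W1, b1, W2, b2):
    # Each layer's variables are computed from the previous layer's variables.
    h = np.tanh(x @ W1 + b1)   # hidden layer from the input layer
    y = h @ W2 + b2            # output layer from the hidden layer
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                      # batch of 4 inputs, 3 features
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)    # params of layer 1
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)    # params of layer 2
print(feedforward(x, W1, b1, W2, b2).shape)      # (4, 1)
```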

Backprop

Dynamic Optimization is Dynamic Programming combined with optimization: it does not just compute the next variables from formulae, it also optimizes them. This process is what Backpropagation does, optimizing the weights and biases (together called params).
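
A minimal sketch of backprop for a single linear layer with a mean-squared-error loss (the data, learning rate, and loop length are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))                  # inputs
t = rng.normal(size=(8, 1))                  # targets
W, b = rng.normal(size=(3, 1)), np.zeros(1)  # params to optimize

for _ in range(100):
    y = x @ W + b                            # feedforward
    err = y - t
    dW = 2 * x.T @ err / len(x)              # dLoss/dW by the chain rule
    db = 2 * err.mean(axis=0)                # dLoss/db
    W -= 0.1 * dW                            # move params against the gradient
    b -= 0.1 * db

print(float(np.mean(err ** 2)))              # loss after the updates
```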

Training

The process of running feedforward and backprop repeatedly over the whole dataset, covering all cases, to optimize the model's params.
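
A minimal sketch of a training loop: feedforward and backprop are repeated over all cases for several epochs (the toy data, batch size, and learning rate are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))                 # all cases
Y = 0.5 * (X[:, :1] + X[:, 1:])              # toy regression targets
W, b = rng.normal(size=(2, 1)), np.zeros(1)
lr, epochs, batch = 0.1, 20, 16

for epoch in range(epochs):
    for i in range(0, len(X), batch):        # sweep over all cases
        x, t = X[i:i + batch], Y[i:i + batch]
        err = x @ W + b - t                  # feedforward
        W -= lr * 2 * x.T @ err / len(x)     # backprop + param update
        b -= lr * 2 * err.mean(axis=0)
```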

Fine-tuning

The process of re-optimizing an already-trained ML model on new data so that it performs better on specific cases.
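
A minimal sketch of the fine-tuning step: the params learned during training are kept and continue to be optimized on a small set of new, case-specific data, usually with a lower learning rate (all names and values here are assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
W, b = rng.normal(size=(2, 1)), np.zeros(1)      # stand-in for pretrained params
X_new = rng.normal(size=(16, 2))                 # new data for specific cases
Y_new = X_new @ np.array([[0.3], [0.7]]) + 0.1

for _ in range(5):                               # short re-optimization run
    err = X_new @ W + b - Y_new
    W -= 0.01 * 2 * X_new.T @ err / len(X_new)   # smaller lr than in training
    b -= 0.01 * 2 * err.mean(axis=0)
```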

PEFT

Parameter-Efficient Fine-Tuning (PEFT) is fine-tuning in which only a small selection of params (weights, biases) is updated, making fine-tuning faster and cheaper.
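
A minimal sketch of the PEFT idea: most params stay frozen and only a chosen subset is updated (here, just the output bias; a real framework would mark params as trainable or frozen instead):

```python
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)    # frozen pretrained params
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # only b2 will be updated

x = rng.normal(size=(8, 3))
t = rng.normal(size=(8, 1))

for _ in range(50):
    h = np.tanh(x @ W1 + b1)                     # frozen layers: no updates
    err = h @ W2 + b2 - t
    b2 -= 0.1 * 2 * err.mean(axis=0)             # update only the selected param
```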

LoRA

Low-Rank Adaptation (LoRA) is a PEFT method that decomposes the update of a large weight matrix into a product of lower-rank matrices (fewer dimensions), so updates are fast and lightweight.
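
A minimal sketch of the LoRA idea: the large pretrained weight stays frozen and only the two small low-rank factors are trainable, so far fewer numbers need to be updated and stored (the sizes and rank below are assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_out, r = 512, 512, 8                 # rank r is much smaller than d_in, d_out

W = rng.normal(size=(d_in, d_out))           # frozen pretrained weight
A = rng.normal(size=(d_in, r)) * 0.01        # trainable low-rank factor
B = np.zeros((r, d_out))                     # trainable; zero init so the
                                             # update starts at zero

x = rng.normal(size=(4, d_in))
y = x @ (W + A @ B)                          # effective weight = W + A @ B

full = W.size                                # params updated by full fine-tuning
lora = A.size + B.size                       # params updated by LoRA
print(f"trainable params: {lora} vs {full} ({lora / full:.1%})")
```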

Others

Terms
