Limiters
There are three types of limiters in a neural network: normalization, regularization, and the threshold function (activation function), each with a different purpose.
Normalization
Normalization rescales the inputs so the weights and bias do not have to adapt to an unbounded range of values; an infinite range cannot be learned.
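A minimal sketch of z-score normalization, assuming NumPy; the X matrix is a made-up placeholder, not data from these notes.

```python
import numpy as np

# Placeholder input matrix: two features on very different scales.
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

mean, std = X.mean(axis=0), X.std(axis=0)
X_norm = (X - mean) / std  # each feature now has mean 0 and std 1
```

With the inputs rescaled like this, the weights and bias only ever see values in a small, predictable range.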
Regularization
Regularization punishes large weight changes so the weights and biases will not consume the whole unbounded range of values, leaving spare room for unknown cases.
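A minimal sketch of L2 regularization added to a mean-squared-error loss, assuming NumPy; the data, weights, and the strength lam are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))   # placeholder inputs
y = rng.standard_normal(20)        # placeholder targets
w = rng.standard_normal(3)         # weights
b = 0.0                            # bias
lam = 0.01                         # regularization strength (illustrative)

pred = X @ w + b
mse = np.mean((pred - y) ** 2)
loss = mse + lam * np.sum(w ** 2)  # L2 penalty punishes large weights
```

The penalty term grows with the squared weights, so training is pushed toward small weights rather than letting them wander off to extremes.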
Threshold Function
The threshold function (activation function) is optional in regression but practically required in classification: it limits the outputs to discrete cases, even integers. It limits the output; separating the inputs is the job of the params.
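A minimal sketch of two threshold functions, assuming NumPy; z stands for some raw model outputs (logits) and is a placeholder.

```python
import numpy as np

z = np.array([-2.0, -0.5, 0.0, 1.5])  # placeholder raw outputs

step = (z >= 0).astype(int)           # hard threshold: limits output to {0, 1}
sigmoid = 1.0 / (1.0 + np.exp(-z))    # soft threshold: limits output to (0, 1)
```

The step function maps outputs straight into integer cases; the sigmoid keeps them in a bounded range that can then be rounded into cases.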
Train-Test Split
Training data should contain multiple entries with similar values for each case. Under that condition, the similar cases can be split into a training set and a test set, to verify during training whether the model is generalizing well.
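A minimal sketch of a random 80/20 train-test split, assuming NumPy; X, y, and the split ratio are placeholders.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
X = rng.standard_normal((100, 4))            # placeholder features
y = rng.integers(0, 2, size=100)             # placeholder labels

idx = rng.permutation(len(X))                # shuffle so similar cases spread out
cut = int(0.8 * len(X))                      # 80% train, 20% test
X_train, y_train = X[idx[:cut]], y[idx[:cut]]
X_test, y_test = X[idx[cut:]], y[idx[cut:]]  # held out to check generalization
```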
Fine-tuning
Full Fine-tuning
Fine-tune all params.
PEFT (Param-Efficient Fine-Tuning)
Fine-tune some params.
LoRA (Low-Rank Adaptation PEFT)
Fine-tune by adding small low-rank matrices alongside the frozen original params; see the sketch at the end of this section.
Prefix PEFT
Fine-tune by prepending trainable prefix vectors to each layer's input while the original params stay frozen.
Adaptor PEFT
Fine-tune by injecting small adapter layers in the middle of the network and training only those.
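A minimal LoRA sketch, assuming PyTorch; the class name LoRALinear, the rank, and the init scale are illustrative choices, not a reference implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a linear layer: freeze it, train only low-rank matrices A and B."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # original params stay frozen
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))  # zero init

    def forward(self, x):
        # Frozen path plus trainable low-rank update: base(x) + x A B.
        return self.base(x) + x @ self.A @ self.B

layer = LoRALinear(nn.Linear(16, 16))
out = layer(torch.randn(2, 16))  # only A and B receive gradients
```

Because B starts at zero, the wrapped layer initially behaves exactly like the frozen original, and fine-tuning only moves the small added matrices.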
Problems in Training
Overfitting
The model memorizes the training set and stops generalizing to unseen data.
Explosion
Gradients grow without bound during backpropagation, making updates unstable; see the clipping sketch below.
Vanish
Gradients shrink toward zero through deep layers, so the early layers stop learning.
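A hedged sketch of one common guard against explosion: gradient clipping caps the gradient norm before each update. The model, batch, and max_norm value are placeholders, assuming PyTorch.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(16, 4), torch.randn(16, 1)                       # placeholder batch

loss = nn.functional.mse_loss(model(x), y)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)    # cap gradient norm
optimizer.step()
```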
Imbalanced Data
The data set for training should be balanced: not too many entries of one output and too few of another.
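A minimal sketch of random oversampling to balance classes, assuming NumPy; X and y are made-up imbalanced data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))  # placeholder features
y = np.array([0] * 90 + [1] * 10)  # heavily imbalanced labels

classes, counts = np.unique(y, return_counts=True)
target = counts.max()              # bring every class up to the majority count
idx = np.concatenate([
    rng.choice(np.where(y == c)[0], size=target, replace=True)
    for c in classes
])
X_bal, y_bal = X[idx], y[idx]      # each class now has `target` entries
```

Duplicating minority entries is the crudest fix; collecting more minority data or weighting the loss per class are alternatives.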