Concise and Practical AI/ML

Limiters

There are three types of limiters in a neural network: normalization, regularization, and the threshold function (activation function). Each serves a different purpose.

Normalization

Normalization rescales the inputs so that the weights and biases do not have to adapt to an unbounded input range; an effectively infinite range cannot be learned.
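
A minimal sketch of z-score normalization with NumPy, using hypothetical input values with very different scales:

```python
import numpy as np

# Hypothetical raw inputs: two features on very different scales.
X = np.array([[1200.0, 0.3],
              [1500.0, 0.7],
              [ 900.0, 0.5]])

# Z-score normalization: each feature gets zero mean and unit variance,
# so the weights never have to stretch toward an unbounded input range.
mean = X.mean(axis=0)
std = X.std(axis=0)
X_norm = (X - mean) / std

print(X_norm)
```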

Regularization

Regularization penalizes large weights so that the weights and biases do not consume the whole unbounded range of values, leaving spare room for unseen cases.
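
As a sketch, an L2 (weight decay) penalty added to a mean squared error loss; the data and the penalty strength `lam` are hypothetical:

```python
import numpy as np

def l2_regularized_loss(y_true, y_pred, weights, lam=0.01):
    # Base loss: mean squared error on the predictions.
    mse = np.mean((y_true - y_pred) ** 2)
    # L2 penalty: punishes large weights so they do not grow without bound,
    # leaving spare room for unseen cases.
    penalty = lam * np.sum(weights ** 2)
    return mse + penalty

# Hypothetical toy values for illustration.
y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.8])
weights = np.array([0.5, -1.2, 2.0])
print(l2_regularized_loss(y_true, y_pred, weights))
```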

Threshold Function

The threshold function (activation function) is optional in regression but essentially required in classification: it limits the outputs to discrete cases, even integers. It constrains the output; separating the inputs is the job of the parameters.
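
Two common examples, sketched with NumPy: a hard step that maps any real output to one of two cases, and a sigmoid that squashes it into the (0, 1) range:

```python
import numpy as np

def step(x):
    # Hard threshold: maps any real output to one of two cases (0 or 1).
    return (x >= 0).astype(int)

def sigmoid(x):
    # Smooth threshold: squashes any real output into the (0, 1) range.
    return 1.0 / (1.0 + np.exp(-x))

raw_output = np.array([-3.0, -0.5, 0.0, 2.0])
print(step(raw_output))     # [0 0 1 1]
print(sigmoid(raw_output))  # values strictly between 0 and 1
```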

Train-Test Split

Training data should contain multiple entries with similar values for each case. Under that condition, similar cases can be split between a training set and a test set, so that it can be verified during training whether the model is generalizing well.
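
A minimal sketch of a stratified split, assuming scikit-learn is available; the dataset below is hypothetical, with several similar entries per class:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical dataset: several entries with similar values for each class.
X = np.array([[0.10], [0.20], [0.15], [0.90], [0.80], [0.85]])
y = np.array([0, 0, 0, 1, 1, 1])

# stratify=y keeps the class proportions the same in both sets, so every
# case seen in training is also represented in the test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1/3, stratify=y, random_state=0)

print(y_train, y_test)
```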
