Common loss functions are listed below. See the page for notation and the meanings of symbols; m below is the number of neurons in the output layer.
All-matching Loss
MAE
Mean Absolute Error. This is generally a poor choice because it is not smooth: its derivative is undefined at zero error and constant in magnitude everywhere else.
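A minimal sketch of MAE averaged over the m output neurons, per the notation above (the function name `mae` is my own):

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean Absolute Error: (1/m) * sum(|y_i - yhat_i|) over the m output neurons.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs(y_true - y_pred))
```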
MSE
Mean Squared Error. This loss function is common and well suited to regression, but it can also be used for classification.
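A corresponding sketch for MSE, again averaged over the m output neurons (the name `mse` is my own):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error: (1/m) * sum((y_i - yhat_i)^2) over the m output neurons.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)
```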
CE
Cross-Entropy. This loss function can work with both sigmoid and softmax activations.
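The sigmoid case is the binary (element-wise) form of cross-entropy; a hedged sketch, assuming predictions already in (0, 1) and averaging over the m output neurons (function name and the clipping epsilon are my own choices):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Element-wise cross-entropy for sigmoid outputs:
    # -(1/m) * sum(y*log(yhat) + (1-y)*log(1-yhat)).
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
```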
CCE
Categorical Cross-Entropy. This loss function is common and well suited to classification; it should be used with the softmax activation only.
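A sketch of CCE applied to softmax outputs, assuming a one-hot target over the m output neurons (names are my own):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the m output neurons.
    e = np.exp(np.asarray(z, dtype=float) - np.max(z))
    return e / e.sum()

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    # CCE: -sum(y_i * log(yhat_i)); with a one-hot target this picks out
    # the negative log-probability of the true class.
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0)
    return -np.sum(np.asarray(y_true, dtype=float) * np.log(y_pred))
```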
Multi-objective Losses
Selective Loss
An all-matching loss matches all output values; a selective loss prioritizes only some of the outputs.
MOL
Multi-objective loss means having multiple loss nodes after the output layer, e.g.
One loss to match only some of the output values
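One way to realize this is a weighted sum of masked losses over the same output layer. A sketch under my own assumptions (selective losses implemented as masked MSE; all names are hypothetical):

```python
import numpy as np

def selective_mse(y_true, y_pred, mask):
    # MSE computed only over the output neurons selected by `mask`.
    mask = np.asarray(mask, dtype=bool)
    diff = np.asarray(y_true, dtype=float)[mask] - np.asarray(y_pred, dtype=float)[mask]
    return np.mean(diff ** 2)

def multi_objective_loss(y_true, y_pred, mask_a, mask_b, w_a=1.0, w_b=1.0):
    # Two loss nodes attached to the same output layer, combined by weights.
    return (w_a * selective_mse(y_true, y_pred, mask_a)
            + w_b * selective_mse(y_true, y_pred, mask_b))
```

Raising `w_a` relative to `w_b` prioritizes matching the outputs selected by `mask_a`.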