Common loss functions are listed below. See the page on notation for the meanings of symbols. Here m is the number of neurons in the output layer.
MAE
Mean Absolute Error. This is a less convenient loss for gradient-based training because it is not smooth: its derivative is discontinuous at zero.
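A minimal NumPy sketch of MAE, assuming the targets and predictions are passed as vectors of length m (the function names and arguments here are illustrative, not from the original page):

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean Absolute Error: average of |y - y_hat| over the m output neurons
    return np.mean(np.abs(y_true - y_pred))

# Example: errors are 0.5, 0.0, 1.0, so the mean is 0.5
print(mae(np.array([1.0, 2.0, 3.0]), np.array([1.5, 2.0, 2.0])))  # 0.5
```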
MSE
Mean Squared Error. This loss function is common and well suited to regression, but it can be used for classification as well.
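A corresponding sketch of MSE under the same assumptions (vector inputs of length m; names are illustrative):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error: average of (y - y_hat)^2 over the m output neurons.
    # Squaring keeps the loss smooth everywhere, unlike MAE.
    return np.mean((y_true - y_pred) ** 2)
```

Because the square grows faster than the absolute value, MSE penalizes large errors more heavily than MAE does.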
CE
Cross-Entropy. This loss function works with both sigmoid and softmax activations.
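A sketch of the binary (sigmoid) form of cross-entropy, assuming y_true holds 0/1 labels and p holds predicted probabilities (again, names are illustrative):

```python
import numpy as np

def binary_cross_entropy(y_true, p):
    # Clip probabilities away from 0 and 1 so log() stays finite
    eps = 1e-12
    p = np.clip(p, eps, 1.0 - eps)
    # -[y*log(p) + (1-y)*log(1-p)], averaged over outputs
    return -np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))
```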
CCE
Categorical Cross-Entropy. This loss function is common and well suited to classification; it should be used with the softmax activation only.
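A sketch of categorical cross-entropy paired with softmax, assuming y_true is a one-hot vector over the m output classes (function and argument names are illustrative):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating
    e = np.exp(z - np.max(z))
    return e / e.sum()

def categorical_cross_entropy(y_true, p):
    # -sum over classes of y*log(p); with one-hot y_true this
    # reduces to -log(probability of the correct class)
    eps = 1e-12
    return -np.sum(y_true * np.log(np.clip(p, eps, 1.0)))

probs = softmax(np.array([2.0, 1.0, 0.1]))          # sums to 1
loss = categorical_cross_entropy(np.array([1.0, 0.0, 0.0]), probs)
```

With a one-hot target, the loss is simply the negative log-probability the softmax assigns to the true class, which is why the two are used together.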