
Loss Functions

Common loss functions are listed below; see the notation page for the meanings of symbols. Throughout, m denotes the number of neurons in the output layer.

All-matching Loss

MAE

Mean Absolute Error. This is not a good default choice: it is not smooth, since its derivative has constant magnitude and is undefined at zero.

MAE = (1/m) · Σᵢ |yᵢ − ŷᵢ|, where yᵢ is the target and ŷᵢ the predicted output.

MSE

Mean Squared Error. This loss is common and well suited to regression, but can be used in classification too.

MSE = (1/m) · Σᵢ (yᵢ − ŷᵢ)²
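A minimal NumPy sketch of these two losses (function names and sample values are illustrative):

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean Absolute Error, averaged over the m output neurons.
    return np.mean(np.abs(y_true - y_pred))

def mse(y_true, y_pred):
    # Mean Squared Error, averaged over the m output neurons.
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([1.0, 0.0, 2.0])
y_pred = np.array([0.5, 0.0, 2.5])
print(mae(y_true, y_pred))  # (0.5 + 0 + 0.5) / 3 ≈ 0.333
print(mse(y_true, y_pred))  # (0.25 + 0 + 0.25) / 3 ≈ 0.167
```

Note how MSE squares the errors, so it penalizes large deviations more heavily than MAE while remaining smooth everywhere.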

CE

Cross-Entropy. This loss function can work with both sigmoid and softmax activations.

CCE

Categorical Cross-Entropy. This loss is common and well suited to classification; it should be used with softmax activation only.
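A minimal sketch of both cross-entropy variants (function names, the `eps` clipping constant, and sample values are illustrative assumptions):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # CE paired with sigmoid outputs: each output neuron is an
    # independent probability in (0, 1).
    p = np.clip(y_pred, eps, 1 - eps)  # clip to avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    # CCE paired with softmax outputs: the m outputs form a single
    # probability distribution and y_true is one-hot.
    p = np.clip(y_pred, eps, 1.0)  # clip to avoid log(0)
    return -np.sum(y_true * np.log(p))

y_true = np.array([0.0, 1.0, 0.0])   # one-hot target
y_pred = np.array([0.1, 0.7, 0.2])   # softmax output
print(categorical_cross_entropy(y_true, y_pred))  # -log(0.7) ≈ 0.357
```

With a one-hot target, CCE reduces to the negative log-probability that the softmax assigns to the correct class, which is why it pairs with softmax only.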

Multi-objective Losses

Selective Loss

An all-matching loss matches all output values. A selective loss prioritizes only some of them.

MOL

A multi-objective loss means having multiple loss nodes after the output layer, e.g.:
One all-matching loss
One loss matching only some of the output values
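The two loss nodes above can be sketched as follows; the boolean `mask` selector and the `weight` combining the two nodes are assumptions for illustration:

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def multi_objective_loss(y_true, y_pred, mask, weight=0.5):
    # Loss node 1: all-matching loss over every output value.
    all_matching = mse(y_true, y_pred)
    # Loss node 2: selective loss over the prioritized outputs only.
    selective = mse(y_true[mask], y_pred[mask])
    # Combine the loss nodes; the weighting scheme is an assumption.
    return all_matching + weight * selective

y_true = np.array([1.0, 0.0, 2.0, 3.0])
y_pred = np.array([1.0, 0.5, 2.0, 2.0])
mask = np.array([False, False, True, True])  # prioritize the last two outputs
print(multi_objective_loss(y_true, y_pred, mask))  # 0.3125 + 0.5 * 0.5 = 0.5625
```

The selective term adds extra gradient pressure on the prioritized outputs, so errors there are penalized twice: once by the all-matching node and again by the selective node.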
 