Feedforward

Feedforward is a dynamic programming process that feeds input to the network and computes through to the last layer and the loss node without optimizing; it is used in training as well as in inference.
A sample network for working through feedforward and backpropagation, and for deriving the backpropagation formulas, has 2 layers, 2 neurons in each layer, and 2 input values:
At least 2 layers are needed to make a network.
At least 2 output neurons are needed for the loss function to make sense.
The network learns in a summarising way, so the first layer should have at least as many neurons as the output layer.
Using only 1 input value doesn't make sense in the generic ML case, so 2 input values are used.

Diagram

Where
x1, x2 are the input values
w1 to w8 are the weights at the dendrites of the neurons
b1 to b4 are the biases of the neurons
d1 to d4 are the dot-products inside the neurons
h1 to h4 are the hidden outputs of the neurons
u1 and u2 are the final outputs of the last-layer neurons
fe is the loss function
e is the loss value
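To make the walkthrough concrete, the variables above can be written out directly in a minimal Python sketch. The numeric values below are arbitrary assumptions chosen only so the later snippets can run; they are not taken from the diagram.

```python
# Hypothetical example values for the 2-layer, 2-neuron network (arbitrary assumptions)
x1, x2 = 0.5, -1.0                    # input values

# First-layer weights and biases
w1, w2, w3, w4 = 0.1, 0.2, -0.3, 0.4  # weights at the dendrites of neurons 1 and 2
b1, b2 = 0.05, -0.05                  # biases of neurons 1 and 2

# Second-layer weights and biases
w5, w6, w7, w8 = 0.7, -0.6, 0.5, 0.8  # weights at the dendrites of neurons 3 and 4
b3, b4 = 0.1, 0.2                     # biases of neurons 3 and 4
```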

Mathematics

A basic neuron uses a dot-product in its nucleus, and the activation function before the axon can be any function. This feedforward is the standard procedure described in many books and articles.

The Dot Products

All the dot-products inside the neurons:
d1 = w1·x1 + w2·x2 + b1
d2 = w3·x1 + w4·x2 + b2
d3 = w5·h1 + w6·h2 + b3
d4 = w7·h1 + w8·h2 + b4
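Continuing the sketch above, the first-layer dot-products can be computed directly; d3 and d4 follow the same pattern but need h1 and h2 from the activation step below.

```python
# First-layer dot-products (pre-activations), continuing from the example values above
d1 = w1 * x1 + w2 * x2 + b1
d2 = w3 * x1 + w4 * x2 + b2
# d3 and d4 use the same pattern with h1 and h2, which come from the activations below
```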

The Activations

Consider that all neurons use the same activation function f.
h1 = f(d1)
h2 = f(d2)
h3 = f(d3)
h4 = f(d4)
u1 = h3, u2 = h4
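A minimal sketch of this step, continuing the snippet above. The sigmoid is used as the shared activation f only as an assumption; any differentiable activation can take its place.

```python
import math

def f(d):
    # Sigmoid chosen as the shared activation function (an assumed choice for f)
    return 1.0 / (1.0 + math.exp(-d))

# Hidden outputs of the first layer
h1, h2 = f(d1), f(d2)

# Second-layer dot-products and hidden outputs
d3 = w5 * h1 + w6 * h2 + b3
d4 = w7 * h1 + w8 * h2 + b4
h3, h4 = f(d3), f(d4)

# Final outputs of the last layer
u1, u2 = h3, h4
```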

The Loss Function

Consider using the common MSE (Mean Squared Error) loss function, so the gradient (derivative) of the loss function is easy to obtain. Let y1 and y2 be the expected (target) output values.
Final loss:
e = fe(u1, u2) = ((y1 − u1)² + (y2 − u2)²) / 2
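Continuing the sketch, the final loss can be computed as below; the target values y1 and y2 are assumed numbers introduced only for the example.

```python
# Assumed target (expected) output values for the example
y1, y2 = 1.0, 0.0

# MSE loss over the two outputs
e = ((y1 - u1) ** 2 + (y2 - u2) ** 2) / 2
print(e)  # the final loss value produced by the feedforward pass
```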
Feedforward ends at this last value, which is the final loss. The next computation continues in Backprop.