Concise and Practical AI/ML

Feedforward

A sample network for walking through feedforward and backpropagation, and for deriving the backpropagation formulas, has 2 layers, 2 neurons in each layer, and 2 input values:
At least 2 layers are needed to make a network.
At least 2 output neurons are needed for the loss function to be meaningful.
The network learns in a summarising way, so the first layer should have at least as many neurons as the output layer.
Using a single input value makes no sense in the generic ML case, so 2 input values are used.

Diagram

[diagram: x1, x2 → first layer (w1–w4, b1, b2) → h1, h2 → second layer (w5–w8, b3, b4) → u1, u2 → fe → e; image not recovered]
Where
x1, x2 are input values
w1 to w8 are weights at dendrites of neurons
b1 to b4 are the biases of neurons
d1 to d4 are the dot-products inside neurons
h1, h2 are the hidden outputs of the first-layer neurons
u1 and u2 are the final outputs of last layer neurons
fe is the loss function
e is the loss value

Mathematics

A basic neuron computes a dot product in its nucleus, and the activation function before the axon can be any suitable function. This feedforward pass is the standard one described in many books and articles.

The Dot Products

All the dot products inside the neurons (assuming $w_1$–$w_4$ wire the inputs to the first layer and $w_5$–$w_8$ wire the hidden outputs to the second):
$d_1 = w_1 x_1 + w_2 x_2 + b_1$
$d_2 = w_3 x_1 + w_4 x_2 + b_2$
$d_3 = w_5 h_1 + w_6 h_2 + b_3$
$d_4 = w_7 h_1 + w_8 h_2 + b_4$
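As a minimal sketch of these formulas (all numeric values are made up for illustration), the first-layer dot products can be computed directly; $d_3$ and $d_4$ follow the same pattern once $h_1$ and $h_2$ are available after activation:

```python
# Hypothetical values, just to make the formulas concrete.
x1, x2 = 0.5, -0.2                     # inputs
w1, w2, w3, w4 = 0.1, 0.2, 0.3, 0.4   # first-layer weights
b1, b2 = 0.01, 0.02                    # first-layer biases

# Dot products of the two first-layer neurons:
d1 = w1 * x1 + w2 * x2 + b1
d2 = w3 * x1 + w4 * x2 + b2
```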

The Activations

Consider that all neurons use the same activation function f.
Hidden outputs:
$h_1 = f(d_1)$, $h_2 = f(d_2)$
Final outputs:
$u_1 = f(d_3)$, $u_2 = f(d_4)$
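The shared activation $f$ is left unspecified in the text; a sketch using the common sigmoid (an assumed choice, not prescribed here):

```python
import math

def f(d):
    """Sigmoid activation -- one common choice for f."""
    return 1.0 / (1.0 + math.exp(-d))

# Example: hidden outputs from some first-layer dot products.
d1, d2 = 0.02, 0.09
h1, h2 = f(d1), f(d2)
```

Any differentiable activation works the same way here; the sigmoid just keeps the outputs in (0, 1).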

The Loss Function

Consider using the common MSE (Mean Squared Error) loss function, whose gradient (derivative) is easy to obtain.
Final loss, with $y_1$, $y_2$ the target values:
$e = f_e(u_1, u_2) = \dfrac{1}{2}\left[(y_1 - u_1)^2 + (y_2 - u_2)^2\right]$
(The factor $\frac{1}{2}$ is a common convention: it cancels the 2 that appears when differentiating the squares.)

Feedforward ends at this last value, the final loss. The computation continues in Backpropagation.
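Putting the steps above together, here is a sketch of one end-to-end pass through this 2-2-2 network. The sigmoid for $f$, the halved MSE, and all parameter values are illustrative assumptions, not the only valid choices:

```python
import math

def sigmoid(d):
    """An assumed choice for the shared activation f."""
    return 1.0 / (1.0 + math.exp(-d))

def feedforward(x1, x2, w, b, f=sigmoid):
    """One pass through the 2-2-2 network; w = [w1..w8], b = [b1..b4]."""
    d1 = w[0] * x1 + w[1] * x2 + b[0]
    d2 = w[2] * x1 + w[3] * x2 + b[1]
    h1, h2 = f(d1), f(d2)            # hidden outputs of the first layer
    d3 = w[4] * h1 + w[5] * h2 + b[2]
    d4 = w[6] * h1 + w[7] * h2 + b[3]
    return f(d3), f(d4)              # final outputs u1, u2

def loss(u1, u2, y1, y2):
    """Halved MSE; the 1/2 cancels when differentiating the squares."""
    return 0.5 * ((y1 - u1) ** 2 + (y2 - u2) ** 2)

# Example run with made-up parameters and targets.
w = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
b = [0.01, 0.02, 0.03, 0.04]
u1, u2 = feedforward(0.5, -0.2, w, b)
e = loss(u1, u2, 1.0, 0.0)   # y1 = 1, y2 = 0 are example targets
```

The loss `e` is the value that backpropagation will differentiate with respect to each weight and bias.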

 