Concise and Practical AI/ML
Neuralnet

Feedforward

A sample network for demonstrating feedforward and backpropagation, and for deriving the backpropagation formulas, has 2 layers, 2 neurons in each layer, and 2 input values:
At least 2 layers are needed to make a network.
At least 2 output neurons are needed for the loss function to be meaningful.
The network learns by summarising, so the first layer should have at least as many neurons as the output layer.
A single input value doesn't make sense in the generic ML case, so 2 input values are used.

Diagram

[Diagram: the 2-layer network — inputs x1, x2; layer-1 neurons producing h1, h2; layer-2 neurons producing u1, u2; loss function fe producing e]
Where:
x1, x2 are the input values
w1 to w8 are the weights at the dendrites of the neurons
b1 to b4 are the biases of the neurons
d1 to d4 are the dot products inside the neurons
h1 and h2 are the hidden outputs of the layer-1 neurons
u1 and u2 are the final outputs of the last-layer neurons
fe is the loss function
e is the loss value

Mathematics

A basic neuron computes a dot product in its nucleus, and the activation function applied before the axon can be any function. This feedforward pass is the standard one described in many books and articles.

The Dot Products

All the dot products inside the neurons:

d1 = w1·x1 + w2·x2 + b1
d2 = w3·x1 + w4·x2 + b2
d3 = w5·h1 + w6·h2 + b3
d4 = w7·h1 + w8·h2 + b4
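As a minimal sketch, the layer-1 dot products can be computed directly in Python. All numeric values below are arbitrary placeholders, not taken from the text; d3 and d4 additionally need the hidden outputs h1, h2, which the activation step produces.

```python
# Arbitrary placeholder values for inputs, weights, and biases
x1, x2 = 0.5, -0.3
w1, w2, w3, w4 = 0.1, 0.2, 0.3, 0.4
b1, b2 = 0.01, 0.02

# Layer-1 dot products: weighted sum of the inputs plus a bias
d1 = w1*x1 + w2*x2 + b1
d2 = w3*x1 + w4*x2 + b2
```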

The Activations

Consider that all neurons use the same activation function f.
Hidden outputs:
h1 = f(d1), h2 = f(d2)

Final outputs:

u1 = f(d3), u2 = f(d4)
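The full forward pass can be sketched in Python. The sigmoid below is only one common choice for f (the text leaves f generic), and all numeric values are arbitrary placeholders:

```python
import math

def f(d):
    # Sigmoid activation, used here only as an example choice for f
    return 1.0 / (1.0 + math.exp(-d))

# Arbitrary placeholder parameters
x1, x2 = 0.5, -0.3
w1, w2, w3, w4 = 0.1, 0.2, 0.3, 0.4
w5, w6, w7, w8 = 0.5, 0.6, 0.7, 0.8
b1, b2, b3, b4 = 0.01, 0.02, 0.03, 0.04

# Layer 1: dot products, then hidden outputs
h1 = f(w1*x1 + w2*x2 + b1)
h2 = f(w3*x1 + w4*x2 + b2)

# Layer 2: dot products, then final outputs
u1 = f(w5*h1 + w6*h2 + b3)
u2 = f(w7*h1 + w8*h2 + b4)
```

With a sigmoid f, every hidden and final output lies strictly between 0 and 1.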

The Loss Function

Use the common MSE (Mean Squared Error) loss function, whose derivative (needed for the gradient) is simple to obtain.
Final loss:

e = fe(u1, u2) = [(y1 − u1)² + (y2 − u2)²] / 2

where y1 and y2 are the expected (target) output values.
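A minimal numeric check of the MSE loss, assuming placeholder outputs and target values (none of these numbers come from the text):

```python
# Placeholder final outputs and target values (illustrative only)
u1, u2 = 0.6, 0.4
y1, y2 = 1.0, 0.0

# MSE over the two outputs; the 1/2 factor cancels when differentiating
e = ((y1 - u1) ** 2 + (y2 - u2) ** 2) / 2
print(e)  # ≈ 0.16
```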

Feedforward ends at this last value, the final loss. The next computation continues in backpropagation.
