A sample network for demonstrating feedforward and backpropagation, and for deriving the backpropagation formulas, has 2 layers, 2 neurons in each layer, and 2 input values:
- At least 2 layers are needed to make a network.
- At least 2 output neurons are needed for the loss function to be meaningful.
- The network learns in a summarising way, so the first layer should have at least as many neurons as the output layer.
- Using a single input value does not make sense in the generic ML case, so 2 input values are used.

Diagram
Where
- w1 to w8 are the weights at the dendrites of the neurons
- b1 to b4 are the biases of the neurons
- d1 to d4 are the dot products inside the neurons
- h1 to h4 are the hidden outputs of the neurons
- u1 and u2 are the final outputs of the last-layer neurons

Mathematics
A basic neuron computes a dot product in its nucleus, and the activation function before the axon can be any function. This feedforward pass is the standard one described in many books and articles.
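The single-neuron computation above can be sketched in a few lines of Python. The sigmoid activation is only one possible choice here, since the article says any activation function can be used:

```python
import math

def sigmoid(x):
    # One common activation; any differentiable function would do.
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias, f=sigmoid):
    # Dot product in the "nucleus", then the activation before the "axon".
    d = sum(i * w for i, w in zip(inputs, weights)) + bias
    return f(d)
```

For example, `neuron([1.0, 2.0], [0.5, -0.5], 0.0)` computes `sigmoid(-0.5)`.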
The Dot Products
All the dot products inside the neurons:
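The diagram is not reproduced here, so the exact wiring of w1 to w8 is an assumption; a common layout, with w1 to w4 feeding the first layer and w5 to w8 the second, gives:

```latex
d_1 = w_1 x_1 + w_2 x_2 + b_1 \\
d_2 = w_3 x_1 + w_4 x_2 + b_2 \\
d_3 = w_5 h_1 + w_6 h_2 + b_3 \\
d_4 = w_7 h_1 + w_8 h_2 + b_4
```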
The Activations
Assume that all neurons use the same activation function f.
Hidden outputs:
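Applying f to each first-layer dot product gives the hidden outputs:

```latex
h_1 = f(d_1), \quad h_2 = f(d_2)
```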
Final outputs:
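Applying f to the second-layer dot products gives the final outputs:

```latex
u_1 = f(d_3), \quad u_2 = f(d_4)
```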
The Loss Function
Consider using the common MSE (Mean Squared Error) loss function, whose gradient (derivative) is easy to obtain.
Final loss:
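Writing y1 and y2 for the expected (target) outputs (names assumed here, since the original equation is not reproduced), the MSE over the 2 outputs is:

```latex
L = \frac{1}{2}\left[(y_1 - u_1)^2 + (y_2 - u_2)^2\right]
```

With 2 outputs, the mean over outputs happens to equal the 1/2-times-sum convention often chosen to cancel the 2 in the derivative.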
Feedforward ends at this last value, the final loss. The next computation goes in the opposite direction: backpropagation.
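The whole feedforward pass described above can be sketched end to end. The weight indexing (w1 to w4 in the first layer, w5 to w8 in the second) is an assumption about the missing diagram, and sigmoid stands in for the unspecified activation f:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def feedforward(x1, x2, w, b, f=sigmoid):
    # w holds (w1..w8), b holds (b1..b4); this wiring is assumed, not confirmed.
    w1, w2, w3, w4, w5, w6, w7, w8 = w
    b1, b2, b3, b4 = b
    # First layer: dot products, then hidden outputs.
    d1 = w1 * x1 + w2 * x2 + b1
    d2 = w3 * x1 + w4 * x2 + b2
    h1, h2 = f(d1), f(d2)
    # Second layer: dot products, then final outputs.
    d3 = w5 * h1 + w6 * h2 + b3
    d4 = w7 * h1 + w8 * h2 + b4
    return f(d3), f(d4)

def mse_loss(u1, u2, y1, y2):
    # MSE over the 2 outputs; the 1/2 factor is a common convention.
    return ((y1 - u1) ** 2 + (y2 - u2) ** 2) / 2
```

For example, with all weights and biases set to zero, every dot product is 0 and both final outputs are `sigmoid(0) = 0.5`, so the loss against targets `(0.5, 0.5)` is exactly 0.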