Concise and Practical AI/ML

Neuralnet Alphabet

Below is an alphabet of symbols commonly used in neuralnet (neural network) code and equations:
a - A coefficient, or constant
b - A bias
c - A concatenation in the middle of the network
d - Dot product
e - Error or loss
f - Activation function
g - Gradient
h - Hidden layer output
i - Iterator variable
j - Iterator variable
k - Iterator variable
l - Not used; easily confused with the digit 1
m - Number of neurons in a layer
n - Number of layers
o - Not used; easily confused with the digit 0
p - Probability P in reinforcement learning
q - Probability Q in reinforcement learning
r - Learning rate
s - Subtraction (u - y), also called delta
t - Derivative of activation function
u - Output (of feedforward)
v - Backpropagation intermediate value
w - Weight
x - Input
y - True output (or expected output)
z - Latent vector
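
To see the alphabet in context, here is a minimal NumPy sketch of one feedforward and backpropagation pass that uses the names above: x input, w weights, b bias, f activation, t its derivative, h hidden output, u feedforward output, y expected output, e error, s = u - y (delta), v backprop intermediate, g gradient, r learning rate. The two-layer shape and the sigmoid activation are illustrative choices, not something the alphabet prescribes.

```python
import numpy as np

def f(z):            # activation function (sigmoid, an assumption)
    return 1.0 / (1.0 + np.exp(-z))

def t(z):            # derivative of the activation function
    fz = f(z)
    return fz * (1.0 - fz)

rng = np.random.default_rng(0)
x = rng.normal(size=3)           # input
y = np.array([1.0])              # true (expected) output
w1 = rng.normal(size=(4, 3))     # weights, layer 1
b1 = np.zeros(4)                 # bias, layer 1
w2 = rng.normal(size=(1, 4))     # weights, layer 2
b2 = np.zeros(1)                 # bias, layer 2
r = 0.1                          # learning rate

for _ in range(100):
    # feedforward
    z1 = w1 @ x + b1
    h = f(z1)                    # hidden layer output
    z2 = w2 @ h + b2
    u = f(z2)                    # output of feedforward
    e = 0.5 * np.sum((u - y) ** 2)   # error (loss)

    # backpropagation
    s = u - y                    # subtraction (u - y), the delta
    v = s * t(z2)                # backpropagation intermediate value
    g2 = np.outer(v, h)          # gradient wrt w2
    v1 = (w2.T @ v) * t(z1)      # intermediate value, layer 1
    g1 = np.outer(v1, x)         # gradient wrt w1

    # gradient descent step
    w2 -= r * g2; b2 -= r * v
    w1 -= r * g1; b1 -= r * v1
```

After the loop, e holds the final loss, which shrinks as u approaches y.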
More terms:
we - A weight on the loss node, connected from an output node
fe - Loss function
te - Derivative of loss function
ge - Gradient of loss function
inp - Whole training set of x (input)
exp - Whole training set of y (expected)
wrt - With Respect To
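
A small sketch of the extended names, assuming squared error as the loss: fe is the loss function, te its derivative, ge a gradient of the loss (here taken wrt a single scalar weight w, an illustrative choice), and inp/exp are the whole training sets of x and y.

```python
import numpy as np

def fe(u, y):        # loss function (squared error, an assumption)
    return 0.5 * np.sum((u - y) ** 2)

def te(u, y):        # derivative of the loss function wrt u
    return u - y

inp = np.array([[0.0], [1.0]])   # whole training set of x (input)
exp = np.array([[0.0], [2.0]])   # whole training set of y (expected)

w = 0.5                          # single scalar weight; model is u = w * x
r = 0.1                          # learning rate
for _ in range(200):
    for x, y in zip(inp, exp):
        u = w * x                # feedforward
        ge = te(u, y) * x        # gradient of the loss wrt w (chain rule)
        w = w - r * ge[0]        # gradient descent step
```

With these two samples the weight converges to w = 2, since only the (x=1, y=2) sample produces a nonzero update.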
