Neuralnet Alphabet

Terms in Neuralnet

Weights: For separating the data (they define the separating line)
Bias: For shifting the separating line
Activation function: For limiting output values
Loss function: For calculating the loss (error)
Derivative of loss: For calculating gradients
Derivative of activation: Used during backpropagation
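
A minimal sketch of how these terms fit together in a single neuron. The sigmoid activation, squared-error loss, and all numeric values below are illustrative assumptions, not prescribed by this page:

```python
import math

def f(d):
    # Activation function (sigmoid): limits the output to (0, 1)
    return 1.0 / (1.0 + math.exp(-d))

w = [0.5, -0.3]   # weights: define the separating line
b = 0.1           # bias: shifts the separating line
x = [1.0, 2.0]    # input
y = 1.0           # true (expected) output

# Feedforward: dot product plus bias, then activation
d = sum(wi * xi for wi, xi in zip(w, x)) + b
u = f(d)

# Loss function: squared error (one common choice)
e = (u - y) ** 2

# Derivative of loss and derivative of activation, combined
# during backpropagation to get the gradient
de_du = 2.0 * (u - y)    # derivative of loss wrt u
du_dd = u * (1.0 - u)    # derivative of sigmoid wrt d
g = [de_du * du_dd * xi for xi in x]  # gradient wrt each weight
```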

Alphabet

This page contains an alphabet of terms used in neuralnet (neural network) code.
For the other terms in Q-learning, the alphabet uses all-uppercase letters to avoid collisions.
a - A coefficient, or constant
b - A bias
c - A concatenation in the middle of the network
d - Dot product
e - Error or loss
f - Activation function
g - Gradient
h - Hidden layer output
i - Iterator variable
j - Iterator variable
k - Iterator variable
l - Not used; easily confused with the digit 1
m - Number of neurons in a layer
n - Number of layers
o - Not used; easily confused with the digit 0
p - Probability P in reinforcement learning
q - Probability Q in reinforcement learning
r - Learning rate
s - Subtraction (u - y), also called delta
t - Derivative of activation function
u - Output (of feedforward)
v - Backpropagation intermediate value
w - Weight
x - Input
y - True output (or expected output)
z - Latent vector
More terms:
we - A weight on the loss node, connected from an output node
fe - Loss function
te - Derivative of loss function
ge - Gradient of loss function
inp - Whole training set of x (input)
exp - Whole training set of y (expected)
wrt - With Respect To
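
As a hypothetical illustration of the naming convention, here is a one-neuron training step written with the letters above; the sigmoid activation, the halved squared-error loss, and the sample data are assumptions for the sketch:

```python
import math

# x input, w weights, b bias, d dot product, f activation, u output,
# y expected output, s = u - y (delta), t derivative of activation,
# g gradient, r learning rate, e error (loss), i iterator,
# inp/exp whole training sets of x and y.

def f(d):
    return 1.0 / (1.0 + math.exp(-d))   # activation (sigmoid)

def t(u):
    return u * (1.0 - u)                # derivative of sigmoid, in terms of u

inp = [[0.0, 1.0], [1.0, 0.0]]  # whole training set of x
exp = [1.0, 0.0]                # whole training set of y
w = [0.2, -0.4]
b = 0.0
r = 0.5                         # learning rate

for x, y in zip(inp, exp):
    d = sum(w[i] * x[i] for i in range(len(w))) + b
    u = f(d)                    # feedforward output
    s = u - y                   # delta
    e = 0.5 * s * s             # loss; its derivative wrt u is simply s
    g = [s * t(u) * x[i] for i in range(len(w))]  # gradient wrt each weight
    for i in range(len(w)):
        w[i] -= r * g[i]        # gradient-descent update
    b -= r * s * t(u)
```

With the halved squared error, the derivative of the loss with respect to u is exactly s, which is why the delta appears directly in the gradient.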
