Neuralnet Alphabet

Terms in Neuralnet

Weights: For orienting the separation line (hyperplane)
Bias: For shifting the separation line
Activation function: For limiting (squashing) the output
Loss function: For calculating the loss (error)
Derivative of loss: For calculating gradients
Derivative of activation: Used during backpropagation (see the sketch below)
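A minimal sketch of where each of these terms sits in a single neuron (Python; the sigmoid activation and squared-error loss are illustrative choices of mine, not fixed by this page):

```python
import math

def sigmoid(z):
    # Activation function: limits the raw score to (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def dsigmoid(u):
    # Derivative of the activation, written in terms of its output u
    return u * (1.0 - u)

def neuron(w, x, b):
    # Weights orient the separation line; the bias shifts it
    d = sum(wi * xi for wi, xi in zip(w, x))
    return sigmoid(d + b)

def loss(u, y):
    # Loss function: squared error between output u and expected output y
    return 0.5 * (u - y) ** 2

def dloss(u, y):
    # Derivative of loss: the starting point for the gradients
    return u - y
```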

Alphabet

Below is an alphabet of the terms used in neuralnets (neural networks).
For the other terms in Q-learning, a separate alphabet (listed further down) is used, with names chosen to avoid collision with these letters.
a - A coefficient, or constant
b - A bias
c - A concatenation in the middle of the network
d - Dot product
e - Error or loss
f - Activation function
g - Gradient
h - Hidden layer output
i - Iterator variable
j - Iterator variable
k - Iterator variable
l - Not used (easily confused with the digit 1)
m - Number of neurons in a layer
n - Number of layers
o - Not used (easily confused with the digit 0)
p - Probability P in reinforcement learning
q - Quality (Q-value) in reinforcement learning
r - Learning rate
s - Subtraction (u - y), also called delta
t - Derivative of activation function
u - Output (of feedforward)
v - Backpropagation intermediate value
w - Weight
x - Input
y - True output (or expected output)
z - Latent vector
More terms:
we - A weight on the loss node, connected from an output node
fe - Loss function
te - Derivative of the loss function
ge - Gradient of the loss function
inp - The whole training set of x (inputs)
exp - The whole training set of y (expected outputs)
wrt - With Respect To
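To see the alphabet in one place, here is a single training step for one neuron written with exactly these letter names (a hedged sketch: the sigmoid activation and squared-error loss are again my own choices):

```python
import math

def f(z):
    # f - activation function (sigmoid here)
    return 1.0 / (1.0 + math.exp(-z))

def t(u):
    # t - derivative of the activation, in terms of the output u
    return u * (1.0 - u)

def train_step(w, b, x, y, r):
    # Feedforward: d - dot product, b - bias, u - output, i - iterator
    d = sum(w[i] * x[i] for i in range(len(x)))
    u = f(d + b)
    # e - error (loss); s - subtraction (u - y), the delta
    e = 0.5 * (u - y) ** 2
    s = u - y
    # Backprop: v - intermediate value, g - gradient wrt each weight
    v = s * t(u)
    g = [v * x[i] for i in range(len(x))]
    # Gradient descent: r - learning rate
    w = [w[i] - r * g[i] for i in range(len(w))]
    b = b - r * v
    return w, b, e
```

For example, train_step([0.5, -0.3], 0.1, [1.0, 2.0], 1.0, 0.1) returns the updated w and b together with the loss e for that sample.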
Q-learning (q-table, q-network):
a - Action
c - Cost
dc - Discount (d already means dot product in SL)
gn - Gain (g already means gradient in SL)
l - [Unused]
o - [Unused]
p - Policy
q - Quality (Q-value)
rw - Reward (r already means learning rate in SL)
rr - Learning rate of RL
s - State
t - Time
td - Temporal Difference
v - Value (under a fixed policy)
The remaining letters currently have no Q-learning meaning assigned.
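These letters come together in the standard tabular Q-learning update; a minimal sketch (the q table layout, s2 for the next state, and the values dc=0.9 and rr=0.1 are illustrative assumptions):

```python
def q_update(q, s, a, rw, s2, dc=0.9, rr=0.1):
    # q - quality table, s - state, a - action, rw - reward, s2 - next state
    gn = rw + dc * max(q[s2])   # gn - gain: reward plus discounted future quality
    td = gn - q[s][a]           # td - temporal difference
    q[s][a] += rr * td          # rr - learning rate of RL
    return q
```

For example, with q = [[0.0, 0.0] for _ in range(3)], calling q_update(q, s=0, a=1, rw=1.0, s2=2) moves q[0][1] toward the received reward.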
 