Concise and Practical AI/ML
Neuralnet Alphabet

This page collects an alphabet of symbols used throughout the neuralnet (neural network) chapters:
a - A coefficient, or constant
b - A bias
c - A concatenation in the middle of the network
d - Dot product
e - Error or loss
f - Activation function
g - Gradient
h - Hidden layer output
i - Iterator variable
j - Iterator variable
k - Iterator variable
l - Not used; easily confused with the digit 1
m - Number of neurons in a layer
n - Number of layers
o - Not used; easily confused with the digit 0
p - Probability P in reinforcement learning
q - The Q-value (action value) in reinforcement learning
r - Learning rate
s - Difference (u - y), also called the delta
t - Derivative of activation function
u - Output (of feedforward)
v - Backpropagation intermediate value
w - Weight
x - Input
y - True output (or expected output)
z - Latent vector
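The single-letter symbols above can be seen together in a tiny worked example. This is a minimal sketch (the numbers and the sigmoid choice are assumptions, not taken from the book) of one neuron's feedforward pass and its error:

```python
import numpy as np

def f(v):
    # f: activation function (here the sigmoid, one common choice)
    return 1.0 / (1.0 + np.exp(-v))

w = np.array([0.5, -0.3])   # w: weights
x = np.array([1.0, 2.0])    # x: input
b = 0.1                     # b: bias

d = np.dot(w, x)            # d: dot product of weights and input
u = f(d + b)                # u: output of feedforward

y = 1.0                     # y: true (expected) output
s = u - y                   # s: difference (u - y), the delta
e = 0.5 * s ** 2            # e: error (squared loss)
```

With these values d = -0.1, so u = f(0.0) = 0.5 and e = 0.125.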
More terms:
we - A weight on the loss node, connected from an output node
fe - Loss function
te - Derivative of loss function
ge - Gradient of loss function
inp - Whole training set of x (input)
exp - Whole training set of y (expected)
wrt - With Respect To
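The extended terms can also be illustrated with a hedged sketch (the network, data, and loss choice are assumptions for illustration only): fe is the loss function, te its derivative, t the activation's derivative, and g the gradient of e wrt w, applied over the training set inp/exp with one gradient-descent step at learning rate r:

```python
import numpy as np

def f(v):                      # f: sigmoid activation
    return 1.0 / (1.0 + np.exp(-v))

def t(v):                      # t: derivative of the activation
    s = f(v)
    return s * (1.0 - s)

def fe(u, y):                  # fe: loss function (squared error)
    return 0.5 * (u - y) ** 2

def te(u, y):                  # te: derivative of the loss wrt u
    return u - y

inp = np.array([[1.0, 2.0]])   # inp: whole training set of x (input)
exp = np.array([1.0])          # exp: whole training set of y (expected)
w = np.array([0.5, -0.3])      # w: weights
b = 0.1                        # b: bias
r = 0.1                        # r: learning rate

for x, y in zip(inp, exp):
    d = np.dot(w, x)                  # d: dot product
    u = f(d + b)                      # u: feedforward output
    g = te(u, y) * t(d + b) * x       # g: gradient of e wrt w (chain rule)
    w = w - r * g                     # one gradient-descent update
```

The chain rule here composes te (loss derivative) with t (activation derivative) and x, matching the alphabet's division of labour between the loss-side terms (fe, te, ge) and the network-side terms (f, t, g).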

 