
Input Separation

The separation of inputs (deciding which class an input belongs to) is done by the nucleus (e.g. the dot product in a basic neuron). The activation function doesn't do separation; it limits the output so that a broad range of raw values doesn't saturate the following layers. That is why the identity activation function works poorly, while ReLU or sigmoid work well.
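A minimal sketch in plain Python (names and values are illustrative, not from the text) showing the two roles: the nucleus computes the dot product plus bias, and the activation only squashes the result.

```python
import math

def neuron(x, w, b):
    """A basic neuron: the nucleus separates, the activation limits."""
    z = sum(xi * wi for xi, wi in zip(x, w)) + b  # nucleus: dot product + bias
    return 1.0 / (1.0 + math.exp(-z))             # sigmoid: squashes z into (0, 1)

print(neuron([2.0, 3.0], [0.5, -1.0], 0.25))  # one forward pass through the neuron
```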

The Confusion

Many beginners think that the separation is done by the activation (threshold) function, but that's wrong: the separation is done by the params (weights and biases), which create the line, plane, etc. of separation. The activation function is only the output limiter, as described above.
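To see why, note that a monotonic activation such as sigmoid crosses its decision threshold (0.5) exactly where the pre-activation w·x + b is zero, so the boundary is set entirely by the params. A small sketch with hand-picked (hypothetical) values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = [1.0, -2.0], 0.5  # these params define the line x1 - 2*x2 + 0.5 = 0
for x in ([2.0, 1.0], [2.0, 2.0]):  # one point on each side of that line
    z = w[0] * x[0] + w[1] * x[1] + b
    print(x, "z =", z, "-> class", int(sigmoid(z) > 0.5))
```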

Bias Weight

Bias is the additional weight added to every neuron, and it is essential for a neuron to do separation. Without a bias, the separation line/plane/hyperplane must pass through the origin (the crossing point of the axes of the input values), so it cannot separate the inputs in most cases.
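A small illustration (hand-picked values, not from the text): to learn the rule "class 1 iff x1 + x2 > 3", the boundary must sit away from the origin, which only the bias can achieve.

```python
# Points labeled by the rule: class 1 iff x1 + x2 > 3.
points = [([1.0, 1.0], 0), ([0.5, 0.2], 0), ([2.0, 2.0], 1), ([3.0, 1.5], 1)]
w = [1.0, 1.0]

for b, label in [(-3.0, "with bias b=-3"), (0.0, "without bias")]:
    preds = [int(w[0] * x[0] + w[1] * x[1] + b > 0) for x, _ in points]
    correct = sum(p == y for p, (_, y) in zip(preds, points))
    print(f"{label}: {correct}/{len(points)} correct")
```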

Linear Separation

A neuron with 2 weights (i.e. 2 inputs) makes a linear separation of the 2-D input space with a line.

Planar Separation

A neuron with 3 weights (i.e. 3 inputs) makes a planar separation of the 3-D input space with a plane.

Hyperplanar Separation

A neuron with 4 or more weights makes a hyperplanar separation with a hyperplane.
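All three cases above follow the same rule: the boundary is the set of points where w·x + b = 0, whatever the dimension. A sketch with arbitrary illustrative weights:

```python
def side(x, w, b):
    """Which side of the boundary w·x + b = 0 a point falls on."""
    return int(sum(xi * wi for xi, wi in zip(x, w)) + b > 0)

print(side([1.0, 2.0], [0.5, -1.0], 1.0))                      # 2 weights: a line
print(side([1.0, 2.0, 3.0], [0.5, -1.0, 0.2], 1.0))            # 3 weights: a plane
print(side([1.0, 2.0, 3.0, 4.0], [0.5, -1.0, 0.2, 0.1], 1.0))  # 4 weights: a hyperplane
```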

Separation by a Layer

A single neuron does a linear/planar/hyperplanar separation.
A layer does a poly-linear, poly-planar, or poly-hyperplanar separation. The 'poly' part means the layer joins the multiple separations made by its neurons, one boundary per neuron, side by side (see the sketch below).
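For example, a layer of two neurons can draw two lines at once; here, with hand-set (illustrative) weights, they carve out the band 1 < x1 + x2 < 3:

```python
def fire(x, w, b):
    return int(sum(xi * wi for xi, wi in zip(x, w)) + b > 0)

def layer(x):
    above_lower = fire(x, [1.0, 1.0], -1.0)   # one neuron: the line x1 + x2 = 1
    below_upper = fire(x, [-1.0, -1.0], 3.0)  # another neuron: the line x1 + x2 = 3
    return (above_lower, below_upper)         # two separations made side by side

for x in ([0.2, 0.2], [1.0, 1.0], [2.5, 2.5]):
    print(x, "->", layer(x))  # only the middle point gets (1, 1): inside the band
```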

Separation by Multiple Layers

Multiple layers do multiple poly-linear, poly-planar, or poly-hyperplanar separations in succession, combining the simple boundaries of earlier layers into more complex regions.
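The classic illustration is XOR, which no single line can separate. A hand-wired (untrained, illustrative) two-layer net solves it: the hidden layer draws two parallel lines, and the output neuron keeps the points between them.

```python
def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden line 1: fires when x1 + x2 > 0.5 (OR-like)
    h2 = step(x1 + x2 - 1.5)    # hidden line 2: fires when x1 + x2 > 1.5 (AND-like)
    return step(h1 - h2 - 0.5)  # output: fires only between the two lines

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))  # prints the XOR truth table
```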


 