Neuron
A neuron in machine learning is modelled after the multipolar neuron in biology: it has multiple inputs and a single output. Multipolar neurons are the most common neurons in the brain and can be found in any brain area containing sensory neurons, interneurons, or motor neurons.

Multipolar Neuron

Terms in Neuron

Params

Both weights and biases are params (parameters); together they let the neuron match different combinations of inputs. The weights set the orientation of the separation line, and the bias shifts that line away from the origin.
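As a small sketch of this geometry (the 2-input neuron and its weight values here are illustrative, not from the text), the separation line of a neuron is w1*x1 + w2*x2 + b = 0; with no bias it passes through the origin, and a nonzero bias shifts it away:

```python
def side(x1, x2, w1, w2, b):
    """Signed distance-like value telling which side of the
    separation line w1*x1 + w2*x2 + b = 0 the point is on."""
    return w1 * x1 + w2 * x2 + b

# Weights only (b = 0): the origin sits exactly on the line.
print(side(0, 0, w1=1.0, w2=-1.0, b=0.0))  # 0.0 -> on the line
# A nonzero bias shifts the line, so the origin is now off it.
print(side(0, 0, w1=1.0, w2=-1.0, b=0.5))  # 0.5 -> off the line
```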

Weights

Weights are the variables attached to the dendrites: each input connection carries its own weight.

Bias

The bias is a special weight owned by each neuron, imaginable as a weight sitting in the soma outside the nucleus. It is connected to no input, so its input is treated as the constant 1.
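A minimal sketch of that view (function names and values are illustrative): adding the bias separately gives the same result as appending a constant-1 input whose weight is the bias.

```python
def neuron_sum(inputs, weights, bias):
    # Usual form: weighted sum of inputs, plus the bias.
    return sum(x * w for x, w in zip(inputs, weights)) + bias

def neuron_sum_bias_as_weight(inputs, weights, bias):
    # Equivalent form: the bias is just one more weight
    # whose input is pinned to the constant 1.
    return sum(x * w for x, w in zip(inputs + [1.0], weights + [bias]))

x, w, b = [0.5, -2.0], [0.3, 0.8], 0.1
print(neuron_sum(x, w, b) == neuron_sum_bias_as_weight(x, w, b))  # True
```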

Core Processing

Dot-product

A basic neuron's core processing is a dot-product: it multiplies each input by its weight, sums the results, and adds the bias.
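A sketch of that core processing with hypothetical inputs and weights:

```python
def dot(xs, ws):
    # Dot-product: multiply each input by its weight and sum.
    return sum(x * w for x, w in zip(xs, ws))

inputs  = [1.0, 2.0, 3.0]    # one value per dendrite
weights = [0.5, -1.0, 0.25]  # one weight per dendrite
bias    = 0.75

z = dot(inputs, weights) + bias  # core-processing result
print(z)  # 0.5 - 2.0 + 0.75 + 0.75 = 0.0
```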

LSTM, Convo, and Others

Other types of neurons, such as LSTM cells and convolutional units, have different core processing rather than a single dot-product.
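As one example (a sketch, not code from this text), a convolutional unit slides a small kernel along the input, computing many local dot-products instead of one global dot-product:

```python
def conv1d(signal, kernel):
    # Valid 1-D convolution (no padding): at each position,
    # take the dot-product of the kernel with a local window.
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

print(conv1d([1, 2, 3, 4], [1, 0, -1]))  # [1-3, 2-4] = [-2, -2]
```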

Activation

The activation step usually applies a limiter (squashing) function, called an activation function, to the result of the core processing.
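A sketch of two common limiter functions (hypothetical usage): the sigmoid squashes any real value into (0, 1), while ReLU clips negatives to 0.

```python
import math

def sigmoid(z):
    # Squash any real z into the open range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    # Pass positives through unchanged; clip negatives to 0.
    return max(0.0, z)

print(sigmoid(0.0))  # 0.5
print(relu(-3.0))    # 0.0
print(relu(2.5))     # 2.5
```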

 