Types of Neurons

More: convolutional neurons handle space (patterns), recurrent neurons handle time (sequences).

All artificial neurons in current use are multipolar and fully connected to the dendrites of neurons in the previous layer (or partially connected, depending on the kind of network). The only differences today are in the nucleus of the neuron (dot product, recurrent, convolutional, etc.) and in the activation function between the nucleus and the axon.

Basic Neuron

General use. The basic neuron learns cases; its nucleus uses a dot product.

Perceptron

The perceptron is a special case of the basic neuron whose output is only zero or one.
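
A minimal sketch (NumPy assumed, names illustrative): the perceptron keeps the dot-product nucleus but uses a hard threshold as its activation, so the output can only be 0 or 1.

import numpy as np

def perceptron(x, w, b):
    # Dot-product nucleus, then a hard 0/1 threshold as the activation
    return 1 if np.dot(x, w) + b > 0 else 0

# Example: an AND gate with hand-picked weights and bias
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, perceptron(np.array(x), np.array([1.0, 1.0]), b=-1.5))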

Neuron Structure in ML

A neuron contains weights and a bias (at the dendrites), a dot product or another summarisation method (at the nucleus), and an activation function between the nucleus and the axon. The axon only transfers data; the data are not changed there any more. See the related pages for the detailed process.
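
A minimal NumPy sketch of this structure (names illustrative, tanh chosen arbitrarily as the activation):

import numpy as np

def neuron_forward(x, w, b, activation=np.tanh):
    # Dendrites: inputs x arrive weighted by w, with bias b
    z = np.dot(x, w) + b     # nucleus: dot-product summarisation
    return activation(z)     # activation sits between nucleus and axon;
                             # the axon just passes this value on unchanged

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.2])
print(neuron_forward(x, w, b=0.3))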

Recurrent Neuron

Sequence learning. The recurrent neuron learns sequences. Its nucleus uses a recurrent algorithm, with memory to remember previous values in time and a forget gate as well.

Basic Recurrent Neuron

The basic recurrent neuron changes its weights based on the sequence of inputs, but it is not effective and not widely used.
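
A scalar NumPy sketch of one basic recurrent step (illustrative, hand-set weights): the nucleus mixes the current input with a hidden state remembered from previous time steps.

import numpy as np

def rnn_step(x_t, h_prev, w_x, w_h, b):
    # Current input plus the remembered previous state, squashed by tanh
    return np.tanh(w_x * x_t + w_h * h_prev + b)

h = 0.0
for x_t in [0.1, 0.5, -0.3]:   # a short input sequence
    h = rnn_step(x_t, h, w_x=0.8, w_h=0.5, b=0.0)
    print(h)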

LSTM Neuron

The LSTM (long short-term memory) neuron has weights called long-term memory and additional internal variables called short-term memory; it is effective at learning sequences. LSTM units learn more slowly than GRU units but are good for large amounts of data.

GRU Neuron

The GRU (gated recurrent unit) neuron has only weights and a forget gate. GRU units learn faster than LSTM units but give worse outputs when there is a large amount of data to learn.
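
A sketch using Keras layer names (assuming TensorFlow/Keras; shapes are illustrative): swapping between the two recurrent units is a one-line change.

from tensorflow.keras import layers, models

def make_model(unit="lstm"):
    # 16 recurrent units over sequences of 10 time steps with 8 features each
    recurrent = layers.LSTM(16) if unit == "lstm" else layers.GRU(16)
    return models.Sequential([
        layers.Input(shape=(10, 8)),
        recurrent,
        layers.Dense(1),
    ])

make_model("lstm").summary()   # heavier unit, learns slower, suits large datasets
make_model("gru").summary()    # lighter unit, learns faster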

Convolutional Neuron

Spatial or sequential patterns. The convolutional neuron learns patterns; its nucleus uses a kernel and matrix multiplication to respond strongly to specific matching patterns.

Convo1D

Convo1D units can learn patterns in sequences, e.g. audio.

Convo2D

Convo2D units can learn patterns in images.

Convo3D

Convo3D units can learn patterns in videos.
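
An illustrative sketch with Keras layer names (assumed framework): the three units differ only in how many axes the kernel slides over.

from tensorflow.keras import layers

# Conv1D: the kernel slides along one axis, e.g. time in an audio waveform
conv1d = layers.Conv1D(filters=8, kernel_size=3)         # input (batch, steps, channels)

# Conv2D: the kernel slides over height and width, e.g. an image
conv2d = layers.Conv2D(filters=8, kernel_size=(3, 3))    # input (batch, h, w, channels)

# Conv3D: the kernel slides over frames, height and width, e.g. a video clip
conv3d = layers.Conv3D(filters=8, kernel_size=(3, 3, 3)) # input (batch, frames, h, w, channels)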

Specific Design Neuron

Most types of neurons can only learn linear logic, due to the common dot-product mechanism. For example, dot-product neurons can learn to add their inputs but cannot learn to multiply them.

Dot-product Neuron

The dot-product formula for the nucleus is:
x1w1 + x2w2 + ...

Customised Dot-product Neuron

Mulneuron

The custom formula for the nucleus is shown below; since this neuron can learn multiplication, it is called a mulneuron:
x1w1 * x2w2 + x3w3 + ...
The first plus sign of the dot-product formula is replaced by a multiplication, so this neuron can learn both addition and multiplication; the remaining plus signs stay as they are.
When learning addition: either w1 or w2 goes to zero.
When learning multiplication: w3 and the weights after it go to zero, and both w1 and w2 turn into 1.
Note, however, that the network may run into explosion; hyperparams should be set carefully.
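
A NumPy sketch of the two nuclei (names illustrative), with hand-set weights showing the two cases above:

import numpy as np

def dot_nucleus(x, w):
    # Standard dot product: x1w1 + x2w2 + ...
    return np.dot(x, w)

def mul_nucleus(x, w):
    # First plus replaced by multiplication: x1w1 * x2w2 + x3w3 + ...
    return (x[0] * w[0]) * (x[1] * w[1]) + np.dot(x[2:], w[2:])

x = np.array([3.0, 4.0, 5.0, 6.0])

# Plain dot product: can only sum weighted inputs -> 3 + 4 + 5 + 6
print(dot_nucleus(x, np.array([1.0, 1.0, 1.0, 1.0])))    # 18.0

# Multiplication of x1 and x2: w1 = w2 = 1, later weights at zero -> 3 * 4
print(mul_nucleus(x, np.array([1.0, 1.0, 0.0, 0.0])))    # 12.0

# Addition of the remaining inputs: w1 (or w2) at zero kills the product term -> 5 + 6
print(mul_nucleus(x, np.array([0.0, 1.0, 1.0, 1.0])))    # 11.0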

 