[New] Concise and Practical AI/ML
  • Pages
    • Preface
    • Artificial Intelligence
      • Concepts
      • High-level Intelligence
    • Maths for ML
      • Calculus
      • Algebra
    • Machine Learning
      • History of ML
      • ML Models
        • ML Model is Better
        • How a Model Learns
        • Boosted vs Combinatory
      • Neuralnet
        • Neuron
          • Types of Neurons
        • Layers
        • Neuralnet Alphabet
        • Heuristic Hyperparams
      • Feedforward
        • Input Separation
      • Backprop
        • Activation Functions
        • Loss Functions
        • Gradient Descent
        • Optimizers
      • Design Techniques
        • Normalization
        • Regularization
          • Drop-out Technique
        • Concatenation
        • Overfitting & Underfitting
        • Explosion & Vanishing
      • Engineering Techniques
    • Methods of ML
      • Supervised Learning
        • Regression
        • Classification
      • Reinforcement Learning
        • Concepts
        • Bellman Equation
        • Q-table
        • Q-network
        • Learning Tactics
          • Policy Network
      • Unsupervised Learning
        • Some Applications
      • Other Methods
    • Practical Cases
    • Ref & Glossary

What is a Neuralnet

A neuralnet is a graph of neurons which usually, but not necessarily, is organized into multiple layers for feeding data forward and propagating errors backward. A neuralnet is also a function: the whole network maps an input to an output. Using a neuralnet with many layers is called deep learning; "deep" means the data goes through layer after layer.
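
As a rough sketch only (none of the names or sizes below come from this doc), the idea that the whole network is a function can be written in a few lines of Python with NumPy:

import numpy as np

def neuralnet(x, w1, b1, w2, b2):
    # First layer: weighted sum of the inputs followed by a non-linearity.
    h = np.tanh(x @ w1 + b1)
    # Second layer: produces the final output of the whole "function".
    return h @ w2 + b2

# Made-up shapes: 3 inputs -> 4 hidden neurons -> 1 output.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

y = neuralnet(np.array([0.2, -1.0, 0.5]), w1, b1, w2, b2)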

Why a Neuralnet can Learn

A neuralnet is a graph of neurons, each performing a calculation that depends on its type. The point is to produce an output for a given input. In order to learn different cases of inputs, the params inside the network change to adapt and give different outputs.
This param-adaptation process is called weight update, and it is the learning process.
A neuralnet can learn a huge number of cases because the connections between layers combine neurons in an exponentially large number of ways.
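
A minimal sketch of a weight update, assuming a single weight and a squared-error loss (both are assumptions for illustration, not taken from this doc):

# One weight, one training case: predict y from x with y_hat = w * x.
w = 0.5           # the current param (weight)
x, y = 2.0, 3.0   # input and desired output
lr = 0.1          # learning rate

y_hat = w * x                 # output the network gives for this input
grad = 2 * (y_hat - y) * x    # gradient of the squared error w.r.t. w
w = w - lr * grad             # weight update: the param adapts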

Generalization

A neuralnet can group different inputs into the same class; this is called generalization.

Summarization

A neuralnet condenses data: the amount of data shrinks from the input to a very small number of numerical values at the output, e.g. hundreds of pixel values reduced to a handful of class scores. This is called summarization.

Fuzzy Logic Output

A neuralnet (except one built from perceptrons) never gives an exact output; the result is always fuzzy, e.g. a class probability with a value between 0 and 1. This is fuzzy logic output.
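
For illustration only (the softmax function and the scores below are assumptions, not part of this doc), an output layer commonly turns raw scores into fuzzy class probabilities between 0 and 1:

import numpy as np

def softmax(scores):
    # Subtract the max for numerical stability, then normalize to probabilities.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
# probs is roughly [0.66, 0.24, 0.10]: no class is an exact 0 or 1.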

Learning Theories

The two main learning theories used in neural networks are gradient descent (attributed to Cauchy), used in supervised learning, and the Bellman equation, used in reinforcement learning. Reinforcement learning can also have supervised learning inside it.
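
A hedged sketch of the two update rules named above, in plain Python (the learning rates, rewards and the tiny Q-table are made-up values for illustration):

# Gradient descent (supervised learning): nudge a param against the gradient.
#   w <- w - learning_rate * dL/dw
w, grad, lr = 0.8, 0.3, 0.01
w = w - lr * grad

# Bellman-style Q-learning update (reinforcement learning):
#   Q(s, a) <- Q(s, a) + alpha * (reward + gamma * max Q(s', a') - Q(s, a))
alpha, gamma = 0.1, 0.9
Q = {("s0", "left"): 0.0, ("s0", "right"): 0.0,
     ("s1", "left"): 0.0, ("s1", "right"): 1.0}
s, a, reward, s_next = "s0", "right", 0.5, "s1"
best_next = max(Q[(s_next, act)] for act in ("left", "right"))
Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])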

Data Engineering

Before data can be fed to the network, it needs a certain level of processing called data engineering. Data engineering is regular programming and logic that converts raw data into a form the network can consume.
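
A small sketch of what such conversion might look like (the field names and encoding choices are assumptions for illustration):

import numpy as np

# Raw records as they might arrive, before any learning happens.
raw = [{"age": 42, "country": "VN"}, {"age": 23, "country": "US"}]

countries = ["US", "VN"]  # assumed fixed vocabulary for one-hot encoding

def to_features(record):
    # Scale age into roughly [0, 1] and one-hot encode the country.
    age = record["age"] / 100.0
    one_hot = [1.0 if c == record["country"] else 0.0 for c in countries]
    return np.array([age] + one_hot)

X = np.stack([to_features(r) for r in raw])  # numeric matrix ready for a network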

More Neuralnets

Neuralnets vary in shape and size; the most common is the summarizer network, which has fewer and fewer params the nearer a layer is to the output.
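
To make the funnel shape concrete (the layer sizes below are made up), a summarizer network simply has shrinking layer widths toward the output:

import numpy as np

layer_sizes = [784, 128, 32, 10]  # fewer and fewer values nearer the output
rng = np.random.default_rng(0)
weights = [rng.normal(size=(n_in, n_out))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

x = rng.normal(size=784)      # e.g. a flattened 28x28 image
for w in weights:
    x = np.tanh(x @ w)        # each layer summarizes the previous one
# x now holds just 10 numbers that summarize the 784 inputs.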

 