[New] Concise and Practical AI/ML

Ref & Glossary

Books
  • Deep Learning

Libraries
  • Backend for training
  • High-level APIs utilising backend
  • HuggingFace
    • High-level Toolset for training
    • Trained model cloud storage

Tools

Models
  • Kaggle
    • Trained model cloud storage
    • Servers with GPU

Others

Communities

Blogs

Glossary

Neuron — A mathematical unit that loosely simulates a biological neuron
Layer — A group of neurons operating in the same context; each layer takes the previous layer's output as its input
Q-learning — A Reinforcement Learning method that applies the Bellman-based Q-update; the learned values can be stored in either a Q-table or a Q-network
Q-table — The original knowledge store in Q-learning, with one slot per state-action pair, so it needs huge amounts of memory for large problems
Q-network — The modern knowledge store (parameters) in Q-learning; far more memory-efficient, because it approximates Q-values by combining parameters instead of keeping a slot for every case
Q-update — The rule for updating a Q-value, derived from the Bellman equation: Q += rate × T, where T is the temporal difference (reward + discounted best next Q-value − current Q-value); see the Q-learning sketch after this glossary
Q-value — The estimated sum of future rewards obtained by taking a specific action from a given state
Q-function — The function that returns a Q-value; it can be implemented as a Q-table or a Q-network
Policy Network — A network in RL that outputs a suggested action (or a distribution over actions) instead of Q-values
AI — Artificial Intelligence, with or without learning
ML — Machine Learning, the part of AI concerned with learning from data
DL — Deep Learning, the ML approach that uses artificial neural networks (ANN, neuralnet)
Bias — The parameter associated with each neuron that shifts the separation boundary defined by the weights
Weight — A parameter in a neuron that scales one of its inputs; together with the other weights and biases it forms the combination that comprehends the left-most (raw) input; see the neuron-layer sketch after this glossary
Loss Function — A function measuring prediction error that training minimizes; ideally convex (shaped like an upward-opening parabola) so it has a single minimum
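
To make the Q-update entry concrete, here is a minimal tabular Q-learning sketch. The environment size, learning rate, and discount factor are illustrative assumptions, not values from this doc.

```python
import numpy as np

# Minimal tabular Q-learning sketch (sizes and hyperparameters are assumed for illustration).
n_states, n_actions = 16, 4
q_table = np.zeros((n_states, n_actions))  # Q-table: one slot per (state, action) pair

learning_rate = 0.1   # the "rate" in the glossary formula
discount = 0.99       # gamma: how strongly future rewards count

def q_update(state, action, reward, next_state):
    """Bellman-based Q-update: Q += rate * temporal_difference."""
    best_next = np.max(q_table[next_state])  # best estimated Q-value reachable from the next state
    temporal_difference = reward + discount * best_next - q_table[state, action]
    q_table[state, action] += learning_rate * temporal_difference

# Example step: action 2 taken in state 5 gave reward 1.0 and led to state 6.
q_update(state=5, action=2, reward=1.0, next_state=6)
```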
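
The neuron, layer, weight, bias, and loss-function entries can likewise be illustrated with a tiny forward pass. The layer sizes, the ReLU activation, and the mean-squared-error loss are assumptions chosen for this sketch, not prescriptions from the doc.

```python
import numpy as np

# Minimal neuron-layer sketch (layer sizes, activation, and loss are assumed for illustration).
rng = np.random.default_rng(0)

x = rng.normal(size=3)             # left-most (raw) input: 3 features
weights = rng.normal(size=(4, 3))  # one row of weights per neuron in the layer
biases = np.zeros(4)               # one bias per neuron, shifting its separation boundary

def relu(z):
    return np.maximum(z, 0.0)      # activation applied element-wise

# Each neuron computes a weighted sum of its inputs plus its bias, then the activation.
layer_output = relu(weights @ x + biases)
# layer_output (shape (4,)) would become the input of the next layer.

target = np.ones(4)
loss = np.mean((layer_output - target) ** 2)  # mean squared error: the quantity training minimizes
print(layer_output, loss)
```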
 