History
Some names to mention: Newton and Leibniz (calculus), Cauchy (gradient descent), Rosenblatt (perceptron), McCulloch and Pitts (neuron), Dreyfus (backpropagation using only the chain rule), Linnainmaa (automatic differentiation), LeCun (convolutional networks, space), Schmidhuber (LSTM, time), Bellman (the equation behind Q-learning), Watkins (Q-learning).
Calculus
No doubt, everybody learns in school that those are Newton and Leibniz.
Gradient Descent
Cauchy, a French mathematician, devised the gradient descent method in 1847 as a way to find a local minimum of a function.
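As a minimal sketch of the idea (the function, learning rate, and starting point below are arbitrary illustrative choices), gradient descent repeatedly steps against the gradient until it settles near a local minimum:

```python
# Minimal gradient descent sketch: minimise f(x) = (x - 3)^2.
# Function, learning rate, and starting point are arbitrary illustrative choices.

def f(x):
    return (x - 3.0) ** 2

def grad_f(x):
    # Derivative worked out by hand: f'(x) = 2 * (x - 3).
    return 2.0 * (x - 3.0)

x = 0.0              # starting point
learning_rate = 0.1

for step in range(100):
    x = x - learning_rate * grad_f(x)   # step in the direction of steepest descent

print(x)  # approaches the minimiser x = 3
```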
Perceptron and Neuron
The perceptron, the basic building block of the neural network, was invented by Frank Rosenblatt in January 1957 at Cornell Aeronautical Laboratory, Inc. in Buffalo, New York. The underlying artificial neuron model itself goes back to Warren McCulloch and Walter Pitts (1943).
Backpropagation
Feedforward (forward propagation) is the preparation step before backpropagation: it uses each layer to calculate the next layer, without any optimisation. The backpropagation pass is essentially dynamic programming: the gradient at each layer is computed from the gradient of the layer after it, reusing the values cached during the forward pass, and the weights are then optimised through weight updates. Rosenblatt had the idea of using backpropagation for error correction but did not succeed in implementing it. Who exactly invented the algorithm is a confusing question (see the Wikipedia article on backpropagation); Dreyfus derived it using only the chain rule. Seppo Linnainmaa later invented automatic differentiation, which makes backpropagation fully computable by a computer, with no manual derivation of the derivatives of the network's activation functions and loss function.
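A tiny hand-derived sketch of how this works (the one-hidden-unit network, sigmoid activation, squared-error loss, and all numeric values below are illustrative assumptions): the backward pass reuses the forward pass's intermediate values and applies the chain rule layer by layer, exactly the derivatives that automatic differentiation now produces for us.

```python
import math

# Tiny illustrative network: one input, one hidden unit, one output,
# sigmoid activations, squared-error loss. All values are arbitrary.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 0.5, 1.0
w1, w2 = 0.4, -0.6       # weights to be learned
learning_rate = 0.1

# Forward pass (feedforward): each layer computed from the previous one.
z1 = w1 * x;  a1 = sigmoid(z1)
z2 = w2 * a1; a2 = sigmoid(z2)
loss = 0.5 * (a2 - target) ** 2

# Backward pass (backpropagation): gradients flow from the loss back to
# each weight, reusing the cached forward values (dynamic programming).
dloss_da2 = a2 - target
da2_dz2 = a2 * (1.0 - a2)          # derivative of the sigmoid
dloss_dz2 = dloss_da2 * da2_dz2    # chain rule
dloss_dw2 = dloss_dz2 * a1

dloss_da1 = dloss_dz2 * w2
da1_dz1 = a1 * (1.0 - a1)
dloss_dz1 = dloss_da1 * da1_dz1
dloss_dw1 = dloss_dz1 * x

# Weight updates (one gradient-descent step).
w2 -= learning_rate * dloss_dw2
w1 -= learning_rate * dloss_dw1
```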
Q-learning
Richard Bellman and Chris Watkins laid the foundation for Q-learning and reinforcement learning: Bellman with his optimality equation, Watkins with the Q-learning algorithm itself.
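As a minimal sketch (the states, actions, reward, learning rate, and discount factor below are arbitrary assumptions for illustration), a single tabular Q-learning update built on the Bellman equation looks like this:

```python
# One tabular Q-learning update, following the Bellman equation.
# States, actions, reward, alpha and gamma are arbitrary illustrative values.

q = {}  # Q-table mapping (state, action) -> value, defaulting to 0.0

def q_value(state, action):
    return q.get((state, action), 0.0)

def q_learning_update(state, action, reward, next_state, actions,
                      alpha=0.1, gamma=0.9):
    # Bellman target: immediate reward plus discounted best future value.
    best_next = max(q_value(next_state, a) for a in actions)
    target = reward + gamma * best_next
    # Move the current estimate a little towards the target.
    q[(state, action)] = q_value(state, action) + alpha * (target - q_value(state, action))

# Example: in state "s0", taking action "right" earned reward 1.0 and led to "s1".
q_learning_update("s0", "right", 1.0, "s1", actions=["left", "right"])
print(q[("s0", "right")])  # 0.1 after this first update
```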
Later Inventors
LSTM Cell
LSTM cells were introduced by Jürgen Schmidhuber, a German computer scientist, together with Sepp Hochreiter.
Convolutional Cell
Convolutional cells were invented by Yann LeCun, a French computer scientist.