Concise and Practical AI/ML
Neuron

Activation Functions

Activation functions limit a neuron's output values. Without this limiting, outputs can grow into larger and larger ranges, which makes the network hard to train; this is also why the identity activation performs badly: it does not limit anything.

Identity Activation

The identity activation function is

f(x) = x

and it does not change or limit the value passed in by the nucleus, for example a dot product of weights and inputs.
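A minimal NumPy sketch (the helper name and the example weights and inputs are my own, not from the text): the identity activation passes the nucleus output, here a dot product, straight through.

import numpy as np

def identity(x):
    # Identity activation: return the input unchanged.
    return x

w = np.array([0.2, -0.5, 0.1])  # hypothetical weights
a = np.array([1.0, 2.0, 3.0])   # hypothetical inputs
z = np.dot(w, a)                # nucleus output: the dot product
print(identity(z))              # prints the same value as z (about -0.5)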

Unit-step Activation

Unit-step

f(x) = 1 if x >= 0, 0 otherwise

Half-maximum Unit-step

f(x) = 1 if x > 0, 0.5 if x = 0, 0 otherwise
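A minimal NumPy sketch of both step variants above (the helper names and example values are my own):

import numpy as np

def unit_step(x):
    # 1 if x >= 0, else 0
    return np.where(x >= 0, 1.0, 0.0)

def half_max_unit_step(x):
    # 1 if x > 0, 0.5 if x == 0, else 0
    return np.where(x > 0, 1.0, np.where(x == 0, 0.5, 0.0))

z = np.array([-2.0, 0.0, 3.0])
print(unit_step(z))           # [0. 1. 1.]
print(half_max_unit_step(z))  # [0.  0.5 1. ]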

Rectifier Activations

A rectifier activation function usually has a flat section and a rising (rectified) section in its graph.

ReLU

The Rectified Linear Unit is the most common activation function, and it is faster to compute than sigmoid-like functions. It limits the output by throwing the negative half of the nucleus output away: f(x) = max(0, x).
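A minimal sketch, assuming the standard definition f(x) = max(0, x) (the helper name is my own):

import numpy as np

def relu(x):
    # ReLU: keep positive values, zero out the negative half.
    return np.maximum(0.0, x)

z = np.array([-1.5, 0.0, 2.0])
print(relu(z))  # [0. 0. 2.]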

Leaky ReLU

Leaky ReLU is a ReLU variant that is not flat on the negative side; instead it rises slightly, which avoids values vanishing due to multiplication with zeros.
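A minimal sketch, assuming the commonly used negative-side slope of 0.01 (the text does not give a slope; it is a tunable hyperparameter):

import numpy as np

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: a small slope on the negative side keeps
    # values (and gradients) from vanishing to exactly zero.
    return np.where(x > 0, x, alpha * x)

z = np.array([-1.5, 0.0, 2.0])
print(leaky_relu(z))  # [-0.015  0.     2.   ]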

Sigmoid-like Activations

Sigmoid

Sigmoid is the most common S-shaped activation function.
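A minimal sketch of the logistic sigmoid, assuming the standard formula f(x) = 1 / (1 + e^-x), which squashes any real value into (0, 1):

import numpy as np

def sigmoid(x):
    # Logistic sigmoid: S-shaped squash into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

z = np.array([-4.0, 0.0, 4.0])
print(sigmoid(z))  # approx. [0.018 0.5   0.982]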

Logistic

Softmax

Softmax is a generalization of the logistic function to multiple classes; the logistic function is its two-class special case.
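A minimal softmax sketch (the max-subtraction is a common numerical-stability trick, not something the text mentions):

import numpy as np

def softmax(z):
    # Softmax: exponentiate, then normalize so the outputs sum to 1.
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / np.sum(e)

z = np.array([2.0, 1.0, 0.1])
print(softmax(z))          # approx. [0.659 0.242 0.099]
print(np.sum(softmax(z)))  # 1.0

For a two-element input [x, 0], the first softmax output equals 1 / (1 + e^-x), which is how the logistic function appears as the two-class case.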

Hyperbolic Tangent
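The text gives no formula here, so as a sketch: tanh(x) = (e^x - e^-x) / (e^x + e^-x) squashes values into (-1, 1) and is zero-centered, unlike the logistic sigmoid's (0, 1) range.

import numpy as np

# Hyperbolic tangent: an S-shaped squash into (-1, 1), zero-centered.
z = np.array([-2.0, 0.0, 2.0])
print(np.tanh(z))  # approx. [-0.964  0.     0.964]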
