The activation function calculates what value a node should give as an output.
Which activation function should I choose? This depends, of course, on your model and on what you want to achieve.
You want the activation function to have a linear or almost linear part, since this is cheaper to compute. However, the non-linear parts are what give a multi-layer network more power than a single-layer network.
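As a sketch of this trade-off, consider ReLU (one common activation function): it is linear on positive inputs, which keeps it cheap to compute, while the clipping at zero provides the non-linearity that stacked layers need. The function name below is just illustrative.

```python
import numpy as np

def relu(x):
    # ReLU: identity for x > 0 (the cheap, linear part),
    # zero for x <= 0 (the non-linearity that gives depth its power).
    return np.maximum(0.0, x)

# Positive inputs pass through unchanged (linear part)...
print(relu(np.array([1.0, 2.0])))   # [1. 2.]
# ...while negative inputs are clipped to zero (non-linear part).
print(relu(np.array([-1.0, 0.5])))  # [0.  0.5]
```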
The following activation functions are available on the Peltarion Platform:
The classical illustration of a node is shown below: the input values and weights are first summed up together with a bias, and the result is then passed through an activation function to generate the output.
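The computation in that illustration can be sketched in a few lines. This is a minimal example, not platform code; the function names and the sigmoid activation are chosen purely for illustration.

```python
import numpy as np

def node_output(inputs, weights, bias, activation):
    # Weighted sum of inputs plus bias...
    z = np.dot(inputs, weights) + bias
    # ...passed through the activation function to produce the output.
    return activation(z)

def sigmoid(z):
    # One possible activation function, squashing z into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

out = node_output(inputs=np.array([0.5, -1.0]),
                  weights=np.array([0.8, 0.2]),
                  bias=0.1,
                  activation=sigmoid)
print(out)  # sigmoid(0.5*0.8 + (-1.0)*0.2 + 0.1) = sigmoid(0.3)
```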