The activation function determines the output value of a node.
Which activation function should I choose? This depends, of course, on your model and what you want to achieve.
You want the activation function to have a linear or almost linear part, since this is cheaper to compute. However, the non-linear part is what gives a multi-layer network more power than a single-layer network.
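ReLU illustrates this trade-off well: it is linear on each side of zero, so it is cheap to compute, yet the kink at zero makes the overall function non-linear. A minimal sketch (the function name is illustrative):

```python
def relu(x):
    # Linear for x > 0, constant zero for x <= 0;
    # the kink at zero is what makes it non-linear overall.
    return max(0.0, x)

print(relu(-2.0))  # 0.0
print(relu(3.0))   # 3.0
```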
The following activation functions are available on the Peltarion Platform:
The classical illustration of a node is shown below: the input values are multiplied by their weights, summed together with a bias, and the result is then passed through an activation function to generate the output.
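The computation a single node performs can be sketched as follows. This is a minimal illustration, not the platform's implementation; the function names are hypothetical, and ReLU is used as an example activation:

```python
def relu(x):
    # Example activation: zero for negative inputs, identity otherwise.
    return max(0.0, x)

def node_output(inputs, weights, bias, activation=relu):
    # Multiply each input by its weight, sum, add the bias,
    # then pass the result through the activation function.
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return activation(weighted_sum)

# weighted sum = 1.0*0.5 + 2.0*(-0.25) + 0.1 = 0.1
print(node_output([1.0, 2.0], [0.5, -0.25], 0.1))  # 0.1
```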