Hard sigmoid is a piecewise-linear approximation of the sigmoid activation function. Because it replaces the sigmoid's exponential with simple linear segments and clipping, it is much faster and cheaper to compute than an ordinary sigmoid activation function.
Hard sigmoid is the default recurrent activation for LSTM models.
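As a minimal sketch, the function can be written in a few lines of NumPy. This uses the slope-0.2, offset-0.5 convention found in Keras; other frameworks define hard sigmoid with slightly different constants (for example, a slope of 1/6).

```python
import numpy as np

def hard_sigmoid(x):
    # Piecewise-linear approximation of sigmoid (Keras convention):
    # 0 for x <= -2.5, 1 for x >= 2.5, and 0.2 * x + 0.5 in between.
    return np.clip(0.2 * np.asarray(x, dtype=float) + 0.5, 0.0, 1.0)

print(hard_sigmoid(0.0))            # center of the linear region
print(hard_sigmoid([-10.0, 10.0]))  # saturated regions
```

Like the true sigmoid, it is 0.5 at the origin and saturates at 0 and 1, but it reaches those saturation values exactly rather than asymptotically.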