AdamW

AdamW is a variant of the Adam optimizer with an improved implementation of weight decay: the decay is applied directly to the weights instead of being mixed into the gradient update.

Weight decay is a form of regularization that lowers the chance of overfitting.
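
If you train models in code rather than through the platform, the same settings map onto common framework APIs. Below is a minimal sketch assuming PyTorch; the model and the parameter values are only illustrative:

    import torch
    import torch.nn as nn

    # Any model works here; a single linear layer keeps the sketch short.
    model = nn.Linear(10, 1)

    # AdamW takes the learning rate and weight decay settings
    # discussed on this page (values shown are illustrative).
    optimizer = torch.optim.AdamW(
        model.parameters(),
        lr=0.001,
        weight_decay=0.001,
    )

    # One training step: forward pass, loss, backward pass, weight update.
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()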

When to change optimizer & optimizer parameters

Once you have settled on the overall model structure but want to achieve an even better model, it can be appropriate to test another optimizer.
This is classic hyperparameter tuning, where you try different settings and see what works best. Any of these optimizers may achieve superior results, though getting there can sometimes require a lot of tuning of other Run settings parameters, for example, the learning rate.

Adjust parameters

To tweak the AdamW optimizer, you can adjust these parameters:


Learning rate

The learning rate controls the size of the update steps along the gradient. It sets how much of the gradient each weight is updated with, where 1 corresponds to 100% of the gradient, but you normally set a much smaller learning rate, e.g., 0.001.
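
As a rough illustration of that scale (plain gradient-step arithmetic, not the full AdamW update), see how far a single weight moves for the same gradient at two learning rates:

    weight = 0.80
    gradient = 0.50

    # Learning rate 1.0 = 100% of the gradient: the weight moves by 0.5.
    print(weight - 1.0 * gradient)      # ~0.30

    # Learning rate 0.001: the weight moves by only 0.0005.
    print(weight - 0.001 * gradient)    # 0.7995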

In our rolling ball analogy, we’re calculating where the ball should roll next in discrete steps (not continuously). The length of these discrete steps is the learning rate.

Choosing a good learning rate is important when training a neural network. If the ball rolls carefully with a small learning rate, we can expect consistent but very small progress. The risk, though, is that the ball gets stuck in a local minimum and never reaches the global minimum.

Figure 1. Learning rate

Larger steps mean that the weights change more in every iteration, so they may reach their optimal value faster, but they may also overshoot the exact optimum.
Smaller steps mean that the weights change less in every iteration, so it may take more epochs to reach their optimal value, but they are less likely to miss an optimum of the loss function.

Learning rate scheduling allows you to use large steps during the first few epochs, then progressively reduce the step size as the weights come closer to their optimal value.
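
A small sketch of what scheduling can look like in code, again assuming PyTorch and a simple step schedule (the schedule type and the numbers are just one possible choice):

    import torch

    model = torch.nn.Linear(10, 1)
    optimizer = torch.optim.AdamW(model.parameters(), lr=0.01)

    # Halve the learning rate every 10 epochs.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

    for epoch in range(30):
        # ... run one epoch of training, calling optimizer.step() per batch ...
        scheduler.step()  # shrink the step size as the weights approach an optimum
        print(epoch, scheduler.get_last_lr())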

Weight decay

Weight decay is a form of regularization that lowers the chance of overfitting by shrinking the weights slightly toward zero at every update.
Default: 0.001
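
To make the effect concrete, here is a simplified sketch of the decay term alone, applied to a single weight; the real AdamW update also adds the adaptive gradient step on top of this, and the numbers are only illustrative:

    learning_rate = 0.001
    weight_decay = 0.001
    weight = 0.80

    # Each update shrinks the weight slightly toward zero,
    # independently of the gradient of the loss.
    weight = weight - learning_rate * weight_decay * weight
    print(weight)  # ~0.7999992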
