Tips to improve for beginners
- Target audience: Beginners
These tips are aimed at users who are just starting to use deep learning to solve their use cases. Of course, intermediate users should not skip them either.
These suggestions assume that you are using the Experiment wizard and not building your own models.
First evaluation - A low Loss is good
To improve a model, it’s good to know how to evaluate it. Start by looking at the loss, a measure of how much error the model is making. To optimize the model, aim for a:
Loss as close to 0 as possible, that is, make as few errors as possible.
We suggest that you run several experiments with different models and parameter settings according to this chapter before you dig deep into evaluation, but it’s always good to have a rough idea of what a good model looks like.
The loss is calculated from the difference between the model output and the provided label.
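As an illustration, here is a minimal Python sketch (with hypothetical numbers; the platform computes this for you) of how a loss value such as mean squared error compares model outputs with labels:

```python
def mean_squared_error(outputs, labels):
    # Average squared difference between model outputs and labels.
    return sum((o - t) ** 2 for o, t in zip(outputs, labels)) / len(outputs)

# Outputs close to the labels give a loss near 0;
# outputs far from the labels give a large loss.
good_loss = mean_squared_error([0.9, 0.1], [1.0, 0.0])  # small, close to 0
bad_loss = mean_squared_error([0.1, 0.9], [1.0, 0.0])   # large
```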
First tips to improve
Here are some useful adjustments that you can quickly make in the Modeling view of the platform.
Duplicate experiment and change Run settings
For any kind of problem, you can change one of these settings in the Run settings section of the Modeling view.
The combination of number of epochs, batch size and learning rate may affect training results.
Larger batch size allows for an increase in the learning rate and might lower training time. However, you might need more epochs to reach the same results as with a smaller batch size. On the other hand, consider reducing the learning rate when lowering the batch size.
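As a rule of thumb (the linear scaling heuristic, not a platform guarantee), the learning rate can be scaled in proportion to the batch size. A minimal sketch with hypothetical values:

```python
def scaled_learning_rate(base_lr, base_batch_size, new_batch_size):
    # Linear scaling heuristic: when the batch size grows by some factor,
    # grow the learning rate by the same factor, and shrink it again
    # when the batch size is lowered.
    return base_lr * new_batch_size / base_batch_size

# Doubling the batch size from 32 to 64 suggests doubling the rate,
# while halving the batch size suggests halving it.
larger = scaled_learning_rate(0.001, 32, 64)   # 0.002
smaller = scaled_learning_rate(0.001, 32, 16)  # 0.0005
```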
Increase Patience in Early Stopping
How? Increase Patience (default is 5 epochs), or skip it completely and let the training run the full amount of epochs.
Early stopping is a feature that enables the training to be automatically stopped when a chosen metric has stopped improving.
Why? A larger Patience means that the training will wait longer before stopping the experiment, giving it more time to reach better results.
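To make the mechanism concrete, here is a minimal Python sketch (with hypothetical loss values; the platform tracks this for you) of how Patience decides when training stops:

```python
def stop_epoch(losses, patience=5):
    # Returns the epoch index at which early stopping would trigger:
    # after `patience` consecutive epochs without an improvement in
    # the monitored loss. Default Patience on the platform is 5.
    best = float("inf")
    waited = 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best = loss
            waited = 0
        else:
            waited += 1
            if waited >= patience:
                return epoch  # stopped early
    return len(losses) - 1  # ran the full amount of epochs

# With Patience 2, training stops two epochs after the loss plateaus.
stop_epoch([1.0, 0.8, 0.7, 0.7, 0.7, 0.7], patience=2)  # epoch 4
```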
Adjust the batch size
How? Try running your experiment with a larger or smaller batch size. Common sizes on our platform are 4, 8, 32, 64, 256, and 512.
The batch size is the number of examples that are processed in each training iteration, after which your model parameters are updated. The batch size influences the ability of the model to learn.
Why? Changing batch size can help the optimization process.
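One way to see why the batch size matters: it sets how many parameter updates happen per epoch. A small sketch with hypothetical numbers:

```python
import math

def updates_per_epoch(n_examples, batch_size):
    # Each batch triggers one parameter update, so an epoch over
    # n_examples performs ceil(n_examples / batch_size) updates.
    return math.ceil(n_examples / batch_size)

# With 1000 training examples: a batch size of 8 gives 125 small,
# noisy updates per epoch, while 256 gives only 4 larger ones.
updates_per_epoch(1000, 8)    # 125
updates_per_epoch(1000, 256)  # 4
```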
Change the learning rate
How? Reasonable values for the learning rate range from 0.1 down to 10^-5. The learning rate can be even lower, but it should never be higher than 1.
The learning rate controls the size of the update step along the gradient. With a small learning rate you can expect to make consistent but very small progress.
Why? A too low learning rate means that you might get stuck in a local minimum and never reach the global minimum, that is, the best result. A too high learning rate means that you might step over the lowest value. Read more on learning rate and optimizers here.
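The update step itself can be sketched in a few lines of Python (a plain gradient descent step on the toy function f(w) = w², not the platform’s actual optimizer):

```python
def gradient_descent_step(w, gradient, learning_rate):
    # The learning rate controls the size of the step taken
    # along the negative gradient.
    return w - learning_rate * gradient

# Minimizing f(w) = w**2, whose gradient is 2*w and whose minimum is
# at w = 0: a moderate learning rate converges steadily toward 0...
w = 1.0
for _ in range(20):
    w = gradient_descent_step(w, 2 * w, learning_rate=0.1)

# ...while a too-high learning rate (here 1.5) overshoots 0 and
# moves further from the minimum at every step.
v = 1.0
for _ in range(20):
    v = gradient_descent_step(v, 2 * v, learning_rate=1.5)
```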
Increase the number of epochs
How? Increase Epochs (default is 100) and deselect Early Stopping.
An epoch is one complete pass of all training data through the model.
Why? You might get a better result if you let the experiment run for a longer time.
Duplicate experiment and change blocks
If you have tried many combinations of settings and your results still did not improve, try changing blocks in your model.
How and where? Click the three dots and click Duplicate. Remove the pretrained block and select a new one.