Loss and metrics

Get an overview of the training progress.

Navigating the loss and metrics plot

One of the metrics is plotted as a function of training epoch. You can change the displayed metric by selecting one from the list. Some metrics are only compatible with certain problem types, so the metrics available depend on your current experiment setup.

The plot shows the curves of the selected experiment in green. The five experiments in the project with the best results for the selected metric are also plotted, allowing comparison.
Some experiments might not be displayed if the loss function or metric selected can’t be calculated for them, for instance if they use a different target.

Loss curve with checkpoints

The Training curve shows the loss or metric calculated on the training subset.
The Validation curve shows the loss or metric calculated on the validation subset.

The circles mark the Best epoch. The best epoch is the epoch with the lowest validation loss, that is, the point where the model generalizes best before overfitting sets in.
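The Platform's exact selection rule isn't exposed, but conceptually the best epoch is the argmin of the validation loss. A minimal sketch with hypothetical per-epoch loss values:

```python
# Hypothetical validation losses recorded after each training epoch.
val_losses = [0.92, 0.61, 0.47, 0.40, 0.38, 0.41, 0.45]

# Best epoch = epoch with the lowest validation loss (epochs counted from 1).
best_epoch = min(range(len(val_losses)), key=lambda i: val_losses[i]) + 1
print(best_epoch)  # epoch 5, where the loss bottoms out at 0.38
```

After epoch 5 the validation loss starts rising again, which is the overfitting pattern described below.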

How to read the loss curve

No matter which loss function your model uses, a smaller loss value means less error. The first thing to look for is therefore that the loss decreases as training progresses through more epochs, ideally getting as close to zero as possible.

However, more training epochs doesn’t always mean better results.
A typical situation is that the model may start to overfit the training data. That is, the model learns to match the training examples (sometimes to perfection) giving a very small training loss, while being incapable of generalizing to any unseen data, giving a higher validation loss.

In those cases, you may want to apply a strategy to reduce overfitting. Alternatively, you can simply use the model from a checkpoint taken before overfitting started, where the training and validation losses were both low.
The Platform saves a Best checkpoint at that point, which you can use when you deploy your model.
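A common way to pick such a checkpoint is early stopping with a patience window: keep the checkpoint with the lowest validation loss so far, and stop once it hasn't improved for a few epochs. This is a sketch of that general technique, not necessarily the Platform's exact rule:

```python
def best_checkpoint(val_losses, patience=3):
    """Return (epoch, loss) of the best checkpoint, stopping early once
    the validation loss has not improved for `patience` epochs."""
    best_epoch, best_loss = 0, float("inf")
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best_loss:
            best_epoch, best_loss = epoch, loss  # new best checkpoint
        elif epoch - best_epoch >= patience:
            break  # overfitting: keep the earlier best checkpoint
    return best_epoch, best_loss

# Hypothetical run: the loss improves until epoch 4, then degrades.
print(best_checkpoint([0.9, 0.6, 0.5, 0.48, 0.5, 0.55, 0.6, 0.7]))  # (4, 0.48)
```

Training stops at epoch 7, three epochs after the best checkpoint, and the epoch-4 model is the one kept.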


Underfitting

  • High training error

  • Training error close to test error

  • High bias

Just right

  • Training error slightly lower than test error

Overfitting

  • Very low training error

  • Training error much lower than test error

  • High variance

[Figure: typical loss curves for underfitting, just right, and overfitting deep learning models]

Possible remedies

For underfitting:

  • Create a more complex model

  • Add more features

  • Train longer

For overfitting:

  • Perform regularization

  • Get more data
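To illustrate the regularization remedy, here is a minimal sketch of an L2 (weight-decay) penalty added to the training loss; the loss value, weights, and penalty strength are all hypothetical:

```python
def l2_penalty(weights, lam=1e-3):
    """L2 regularization adds lam * sum(w^2) to the loss, which
    discourages large weights and reduces overfitting."""
    return lam * sum(w * w for w in weights)

data_loss = 0.25              # hypothetical loss on the training data
weights = [0.5, -1.2, 3.0]    # hypothetical model weights
total_loss = data_loss + l2_penalty(weights)
print(total_loss)
```

The optimizer then minimizes `total_loss`, trading a slightly higher data loss for smaller, better-generalizing weights.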

The blog post Bias-Variance Tradeoff in Machine Learning is a nice summary of different things to try depending on your problem.
