Loss and metrics

Get an overview of the training progress.

Navigating the loss and metrics plot

All the experiments of the project that use the same loss function are plotted on the graph, allowing comparison. The selected experiment is the only one shown in color.

You can choose whether the plot shows the loss or any other metric by clicking its name above the plot. Some metrics are only compatible with certain problem types, so the metrics available depend on your current experiment setup.

Loss curve with checkpoints

The Training curve shows the loss or metric calculated on the training subset.
The Validation curve shows the loss or metric calculated on the validation subset.

The circles mark the Best and Last epochs. The best epoch is the one with the lowest validation loss, which is also the point where the model shows the least overfitting.
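The best-epoch selection can be sketched in a few lines of Python. This is an illustration, not the Platform's actual implementation; the `val_losses` list is a hypothetical stand-in for the per-epoch validation losses recorded during training:

```python
# Hypothetical per-epoch validation losses (epoch 1 first).
val_losses = [0.92, 0.61, 0.48, 0.45, 0.47, 0.52]

# The best epoch is simply the one with the lowest validation loss;
# the last epoch is wherever training stopped.
best_epoch = min(range(len(val_losses)), key=lambda e: val_losses[e]) + 1
last_epoch = len(val_losses)

print(best_epoch, last_epoch)  # → 4 6
```

Note that the validation loss starts rising again after epoch 4, which is why the best checkpoint is taken there rather than at the last epoch.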

How to read the loss curve

No matter which loss function your model uses, smaller loss values mean less error. Therefore, the first thing to look for is that the loss decreases as training runs over more epochs, getting as close to zero as possible.

However, more training epochs don't always mean better results.
A typical situation is that the model starts to overfit the training data: it learns to match the training examples (sometimes perfectly), yielding a very small training loss, while failing to generalize to unseen data, which shows up as a higher validation loss.
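Overfitting is easy to reproduce on synthetic data. The sketch below (an illustration of the general phenomenon, not tied to the Platform) fits a degree-9 polynomial through 10 noisy training points, so the model can match the training set almost exactly while doing much worse on held-out validation points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression task: y = sin(x) + noise.
x_train = np.linspace(0, 3, 10)
y_train = np.sin(x_train) + rng.normal(0, 0.1, size=10)
x_val = np.linspace(0.1, 2.9, 10)
y_val = np.sin(x_val) + rng.normal(0, 0.1, size=10)

# A degree-9 polynomial can pass through all 10 training points,
# so the training loss becomes tiny while the validation loss,
# measured on different x-values, stays much higher.
coeffs = np.polyfit(x_train, y_train, deg=9)
train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)

print(f"train MSE: {train_mse:.2e}, val MSE: {val_mse:.2e}")
```

The large gap between the two losses is exactly the signature you look for in the validation curve on the plot.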

In those cases, you may want to apply a strategy that reduces overfitting. You can also simply use the model from a checkpoint where both losses were still low, before overfitting set in.
The Platform saves a Best checkpoint at that point, which you can use when you deploy your model.


Underfitting

  • High train error

  • Training error close to test error

  • High bias

Just right

  • Training error slightly lower than test error

Overfitting

  • Very low training error

  • Training error much lower than test error

  • High variance

Loss curve: figures showing the typical training and validation loss curves for underfitting, just right, and overfitting.

Possible remedies

For underfitting:

  • Create a more complex model

  • Add more features

  • Train longer

For overfitting:

  • Perform regularization

  • Get more data
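Of the overfitting remedies, regularization is simple to illustrate in code. The sketch below is a generic example, assuming plain gradient descent on linear regression with an added L2 (weight-decay) penalty; it is not the Platform's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data with a few irrelevant features.
X = rng.normal(size=(50, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.0, 0.5])
y = X @ w_true + rng.normal(0, 0.1, size=50)

def fit(X, y, l2=0.0, lr=0.1, epochs=500):
    """Gradient descent on MSE + l2 * ||w||^2."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * l2 * w
        w -= lr * grad
    return w

w_plain = fit(X, y)
w_reg = fit(X, y, l2=1.0)

# The L2 penalty shrinks the weights toward zero, trading a little
# bias for lower variance, which is what combats overfitting.
print(np.linalg.norm(w_plain), np.linalg.norm(w_reg))
```

The regularized weight vector has a smaller norm than the unregularized one; on a real, noisy dataset this shrinkage is what keeps the model from chasing the noise in the training set.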

The blog post Bias-Variance Tradeoff in Machine Learning is a nice summary of different things to try depending on your problem.