Classification loss metrics

Loss curve

The loss curve is shown once a model has started training and has completed at least one epoch.

The curve marked Training shows the loss calculated on the training data for each epoch.

The curve marked Validation shows the loss calculated on the validation data for each epoch.

Exactly what the loss curve means depends on which of the loss functions you selected in the Modeling view.
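To make concrete what a per-epoch loss value is, here is a minimal NumPy sketch of one common choice, categorical cross-entropy, computed on a tiny hypothetical batch (the function name and example values are illustrative, not from the product):

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean categorical cross-entropy over a batch.
    y_true: one-hot targets, shape (n_samples, n_classes)
    y_pred: predicted probabilities, same shape."""
    y_pred = np.clip(y_pred, eps, 1.0)  # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# Toy batch: three samples, three classes (hypothetical values).
y_true = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
y_pred = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6]])
print(round(categorical_cross_entropy(y_true, y_pred), 3))  # → 0.364
```

Averaging this value over all batches in an epoch, on the training and validation sets respectively, gives one point on each curve.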

Suggestions on how to improve

You want the loss scores to be as low as possible, and you want the training error to be slightly lower than the test error.

This table gives you a hint of what you can do if the model underfits or overfits.

Underfitting

  • High training error

  • Training error close to test error

  • High bias

Possible remedies:

  • Create a more complex model

  • Add more features

  • Train longer

Just right

  • Training error slightly lower than test error

Overfitting

  • Very low training error

  • Training error much lower than test error

  • High variance

Possible remedies:

  • Perform regularization

  • Get more data

[Figure: example loss curves for underfitting, just right, and overfitting]
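The under-/overfitting signs above can be sketched as a rough heuristic that compares final training and validation losses. The thresholds here are illustrative assumptions, not values from this documentation:

```python
def diagnose(train_loss, val_loss, gap_ratio=1.5, high_loss=1.0):
    """Rough rule of thumb: classify a fit from final losses.
    gap_ratio and high_loss are hypothetical, problem-dependent thresholds."""
    if train_loss > high_loss and val_loss > high_loss:
        return "underfitting"   # high bias: both errors stay high
    if val_loss > train_loss * gap_ratio:
        return "overfitting"    # high variance: large train/validation gap
    return "just right"

print(diagnose(1.8, 1.9))    # → underfitting
print(diagnose(0.05, 0.60))  # → overfitting
print(diagnose(0.30, 0.35))  # → just right
```

In practice you would inspect the whole curves, not just the final values, but the same comparisons apply.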

Example: If your model has a large discrepancy between training and validation losses, you can try introducing Dropout and/or Batch normalization blocks to improve generalization, that is, the ability to correctly predict new, previously unseen data points.
For image classification, image augmentation may help in some cases.
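To illustrate what a Dropout block does, here is a minimal NumPy sketch of inverted dropout (the standard formulation; the function and shapes are illustrative, not this platform's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate=0.5, training=True):
    """Inverted dropout: during training, randomly zero a fraction `rate`
    of activations and scale the survivors by 1/(1-rate) so the expected
    activation is unchanged. At inference time, pass activations through."""
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones((4, 8))
y = dropout(x, rate=0.5)   # surviving units are scaled from 1.0 to 2.0
z = dropout(x, training=False)  # inference: unchanged
```

By forcing the network not to rely on any single activation, dropout reduces the train/validation gap; Batch normalization similarly stabilizes training by normalizing activations per batch.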

The blog post Bias-Variance Tradeoff in Machine Learning gives a nice summary of different things to try depending on your problem.