Exact match ratio

The exact match ratio is a strict metric of model performance for multi-label problem types. Its value is 1 when the model predicts every example perfectly, and 0 when no example is predicted entirely correctly.

When is exact match ratio available

The exact match ratio metric is available when the problem type of the model is multi-label. This is the case, for instance, in:

  • Multi-label classification. The Build your own music critic tutorial shows a case where a single song might belong to one or more categories at once, such as Epic and Happy.

  • Image segmentation. The Skin cancer detection tutorial shows a case where a skin pathology might be visible in some (or none) of the pixels of an image.

In these cases, a model doesn’t simply make correct or incorrect predictions. Since many labels are possible at once, a model’s predictions are often partly correct, depending on how many class labels (or image pixels) are correctly or incorrectly identified.
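As a minimal illustration (the label matrices below are made up), consider binary indicator arrays where each row is an example and each column a label:

```python
import numpy as np

# Illustrative label matrices: rows are examples (e.g., songs),
# columns are labels (1 = the label applies).
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 1, 0],   # fully correct
                   [0, 1, 1, 0],   # partly correct: one false positive
                   [1, 0, 0, 1]])  # partly correct: one missed label

# Fraction of labels identified correctly, per example
print((y_true == y_pred).mean(axis=1))  # [1.   0.75 0.75]
```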

What’s the exact match ratio

The exact match ratio is a very strict measure of model performance. An example counts toward the metric only when the model correctly identifies every label the example has, without any false positives.

Use this metric when a model must predict all labels of an example correctly to be considered good.
If predictions that get most of the labels right are still useful to you, check the intersection over union metric instead.

The exact match ratio is the fraction of examples whose predicted labels match the true labels exactly:

\[\text{Exact match ratio} = \frac{\text{Number of examples with exact label match}}{\text{Total number of examples}}\]
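As a minimal sketch of the formula (the label matrices are illustrative), the ratio can be computed directly; for multi-label indicator arrays, scikit-learn's `accuracy_score` computes the same quantity, there called subset accuracy:

```python
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 1, 0],   # exact match
                   [0, 1, 1, 0],   # one false positive -> not an exact match
                   [1, 1, 0, 1]])  # exact match

# Direct application of the formula: an example counts only if
# every one of its labels is predicted correctly.
exact_matches = np.all(y_true == y_pred, axis=1)
print(exact_matches.mean())  # 2/3 ≈ 0.667

# scikit-learn's accuracy_score gives the same "subset accuracy"
# when passed multi-label indicator arrays.
print(accuracy_score(y_true, y_pred))  # 0.666...
```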

Suggestions on how to improve

Large discrepancy

If there is a large discrepancy between training and validation accuracy, the model is overfitting: it performs well on examples it has seen during training (resulting in a low training loss), but badly on new examples it hasn't seen before (resulting in a high validation loss). To improve generalization, try introducing dropout and/or batch normalization blocks.
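If you build models in code rather than with blocks, a minimal Keras sketch of the same idea looks like this (layer sizes, dropout rate, and output dimension are illustrative assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128,)),              # assumed input size
    layers.Dense(64, activation="relu"),
    layers.BatchNormalization(),             # stabilizes activations
    layers.Dropout(0.5),                     # randomly drops units during training
    layers.Dense(10, activation="sigmoid"),  # multi-label output
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```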

A large discrepancy can also indicate that the validation data differ too much from the training data. In that case, create a new split between the training and validation subsets.
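A minimal sketch of re-splitting with scikit-learn, assuming a feature matrix X and a label matrix y (placeholders below):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 128)           # placeholder features
y = np.random.randint(0, 2, (1000, 4))  # placeholder multi-label targets

# Reshuffle everything into a fresh 80/20 training/validation split so
# both subsets are drawn from the same distribution.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=42)
```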

Low accuracy

If the training accuracy is low, the model is not learning well enough. Try to build a new model or collect more training data.
