# Predictions inspection

While the loss and metrics tell you about the model's performance as a whole, inspecting the model's output predictions example by example lets you:

• Verify that the model output is what you expect:
Does it have the right type (numeric value(s), image, categorical, etc.)? Does it look qualitatively correct given the input the model received?

• Assess the accuracy of the predictions. Knowing that the model's average error is 0.013 is useful, but what does it mean for each prediction? Comparing individual predictions against their targets gives you a picture that is both quantitative and meaningful.
Supporting charts like the confusion matrix and the scatter plot give a sense of the distribution of errors, and let you filter on specific examples.

• Discover anomalies and unexpected relationships. Observing and filtering examples interactively can give you important clues to improve the model's performance: Do the misclassified examples actually have a wrong target? Does every image classified as a bird happen to have a sky-blue background?
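To make the error-distribution idea concrete, here is a minimal illustrative sketch (not the platform's internal code) of how a confusion matrix summarizes per-example predictions for a categorical target; the label names and example data are invented for illustration:

```python
# Build a confusion matrix from per-example targets and predictions.
# Rows are targets, columns are predictions; off-diagonal cells hold
# the misclassified examples worth inspecting individually.
from collections import Counter

def confusion_matrix(targets, predictions, labels):
    """Count (target, prediction) pairs for each pair of class labels."""
    counts = Counter(zip(targets, predictions))
    return [[counts[(t, p)] for p in labels] for t in labels]

targets     = ["cat", "bird", "bird", "dog", "cat"]
predictions = ["cat", "bird", "dog",  "dog", "bird"]
labels = ["bird", "cat", "dog"]

matrix = confusion_matrix(targets, predictions, labels)
# matrix[0][2] counts examples whose target is "bird" but
# that were predicted as "dog" — one of the cells you would
# click on to filter the predictions table.
```

Selecting a cell of such a matrix corresponds to filtering the predictions down to one (target, prediction) combination.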

## Select the subset and checkpoint to inspect

Figure 1. Select the dataset and model checkpoint you want to inspect. Predictions that have already been calculated are marked with a green check.

You can select which data subset you want to inspect.
This can be the training or validation subset, or any other custom subset, as long as it exists in the dataset version that the model was trained on.

You can also choose which checkpoint of the model to calculate predictions with.
A checkpoint is automatically created every epoch. This lets you inspect the model's predictions at the first, last, or any intermediate epoch, so you can see how training (and possibly overfitting) affects the predictions.

### Requesting predictions for inspection

Predictions are calculated automatically in the background whenever a new Best checkpoint is created. You can also request predictions for any specific checkpoint by using the Inspect button.

When predictions have already been calculated for a specific subset and checkpoint, a green check mark indicates that those predictions are available and will display faster.

## Select the label to inspect

If your problem is a multi-label classification problem, i.e., one where an example may belong to several categorical classes at the same time, you can select which class Label you want to inspect. The prediction table, the Confusion matrix, and the ROC Curve will then show the performance of the model on the selected Label.

For example, in the Build your own music critic tutorial, you classify songs using multiple labels like Angry, Countryside, Dark, Epic, or Happy.
A model might be very good at predicting whether a song is Happy, but have more trouble determining whether the song is also Epic. By inspecting predictions label by label, you get a clear picture of how the model is behaving.
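The idea can be sketched in a few lines: in multi-label classification each example carries one binary target per label, so quality can be measured one label at a time. This is a hypothetical illustration; the label names and data below are made up, not taken from the tutorial:

```python
# Per-label accuracy for a multi-label problem: each example has an
# independent 0/1 target for every label, so each label gets its own score.
def per_label_accuracy(targets, predictions, label_names):
    """targets/predictions: lists of dicts mapping label name -> 0 or 1."""
    accuracy = {}
    for name in label_names:
        correct = sum(t[name] == p[name] for t, p in zip(targets, predictions))
        accuracy[name] = correct / len(targets)
    return accuracy

targets = [{"Happy": 1, "Epic": 0}, {"Happy": 0, "Epic": 1}, {"Happy": 1, "Epic": 1}]
preds   = [{"Happy": 1, "Epic": 1}, {"Happy": 0, "Epic": 0}, {"Happy": 1, "Epic": 0}]

scores = per_label_accuracy(targets, preds, ["Happy", "Epic"])
# Here "Happy" is predicted correctly on every example while "Epic" is
# wrong on every example — exactly the kind of gap that an overall
# metric averages away but label-by-label inspection reveals.
```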

## Predictions table

Individual predictions are displayed in table form. For each example, the table shows the actual feature that the model had as its target, the prediction output by the model, and the feature(s) that were fed into the model as input.

Depending on the problem type, the table may be supported by a confusion matrix or a scatter plot, showing the distribution of predictions for the whole subset. Select data on these charts and the table will be filtered to show only the selected examples.
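The chart-to-table interaction can be sketched as a simple filter. This is an illustrative model of the behavior, not the platform's implementation; the row structure and file names are assumptions:

```python
# A predictions table as a list of rows (target, prediction, input),
# and a filter that mimics selecting one confusion-matrix cell.
predictions_table = [
    {"target": "bird", "prediction": "bird", "input": "img_01.png"},
    {"target": "bird", "prediction": "cat",  "input": "img_02.png"},
    {"target": "cat",  "prediction": "bird", "input": "img_03.png"},
]

def filter_by_cell(table, target, prediction):
    """Keep only the rows in the selected (target, prediction) cell."""
    return [row for row in table
            if row["target"] == target and row["prediction"] == prediction]

selected = filter_by_cell(predictions_table, "cat", "bird")
# Selecting the (cat, bird) cell narrows the table to the single
# misclassified cat example, ready for closer inspection.
```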

Figure 2. Example of predictions table for categorical target features