
Predictions inspection

While the Loss and metrics give you information about the model's performance as a whole, inspecting the model's output predictions example by example lets you:

  • Check that your model's output is what you would expect:
    Does it have the right type (numeric value(s), image, categorical, etc.)? Does it look qualitatively correct given the input that the model received?

  • Assess the accuracy of the predictions. It's good to know that the average error of the model is 0.013, but what does that mean for each prediction? Comparing individual predictions against their targets gives you a sense of accuracy that is both quantitative and meaningful.
    Supporting charts like the confusion matrix and the scatter plot give a sense of the distribution of errors, and let you filter on specific examples.

  • Discover anomalies and unexpected relationships. Observing and filtering examples interactively can give you important clues to improve the model’s performance: Do the misclassified examples actually have a wrong target? Does every image classified as bird happen to have a sky blue background?
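To see why per-example inspection adds something that an aggregate metric hides, consider this minimal sketch in plain Python. All numbers here are invented for illustration; the platform computes the equivalent views for you.

```python
# Sketch: an aggregate error can hide a single badly wrong prediction.
# Targets and predictions below are made up for illustration.

targets = [1.0, 2.0, 3.0, 4.0]
predictions = [1.1, 1.9, 3.0, 6.0]  # example 3 is a clear outlier

# Per-example absolute errors, and their average.
errors = [abs(p - t) for p, t in zip(predictions, targets)]
mean_error = sum(errors) / len(errors)

print(f"mean absolute error: {mean_error:.3f}")

# The aggregate looks fine; per-example inspection flags the outlier.
for i, e in enumerate(errors):
    flag = "  <- inspect this one" if e > 2 * mean_error else ""
    print(f"example {i}: error {e:.2f}{flag}")
```

Three of the four predictions are close to their targets, yet the mean error is dominated by the single outlier; only looking example by example reveals which prediction to investigate.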

Select the subset and checkpoint to inspect

Select the dataset and model checkpoint for inspection
Figure 1. Select the dataset and model checkpoint you want to inspect. Predictions that have already been calculated are marked with a green check.

You can select which data subset you want to inspect.
This can be the training or validation subset, or any other custom subset, as long as it exists in the dataset version that the model is training on.

You can also choose which checkpoint of the model to calculate predictions with.
A checkpoint is automatically created every epoch. This allows you to inspect the predictions of the model at the first, the last, or indeed any epoch, so that you can see how training (and possibly overfitting) affects the predictions.
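Conceptually, a checkpoint is a snapshot of the model taken after each epoch, any of which can later be used to compute predictions. The toy one-parameter "model" and training loop below are invented purely to illustrate the idea; the platform handles checkpointing automatically.

```python
# Conceptual sketch of per-epoch checkpoints (the "model" and
# training loop here are made up for illustration).

checkpoints = {}  # epoch -> model snapshot
weight = 0.0      # toy one-parameter "model"

for epoch in range(1, 6):
    weight += 0.5              # stand-in for one epoch of training
    checkpoints[epoch] = weight  # snapshot taken after every epoch

# Any epoch's snapshot can later be used to compute predictions,
# e.g. to compare an early model against the final one.
def predict(w, x):
    return w * x

print(predict(checkpoints[1], 2.0))  # predictions from epoch 1
print(predict(checkpoints[5], 2.0))  # predictions from epoch 5
```

Comparing predictions made from an early checkpoint with those from the last one is exactly how training effects such as overfitting become visible.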

Requesting predictions for inspection

Predictions are calculated only when you click the Inspect button, or when no predictions are available at all.
This is because calculating the predictions requires running the specific model checkpoint on the entire specified data subset, which may take some time.

When predictions have already been calculated for a specific subset and checkpoint, a green check mark indicates that those predictions can be displayed immediately, without being recalculated.

Predictions table

Individual predictions are displayed in table form. For each example, the table shows the actual value of the target feature, the prediction output by the model, and the feature(s) that went into the model input.

Depending on the problem type, the table may be supported by a confusion matrix or a scatter plot, showing the distribution of predictions for the whole subset. Select data on these charts and the table will be filtered to show only the selected examples.
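For a categorical target, selecting a confusion-matrix cell amounts to filtering the table down to the examples with that (target, prediction) pair. A rough sketch in plain Python, with a made-up five-row predictions table:

```python
from collections import Counter

# Made-up predictions table for a categorical target.
rows = [
    {"target": "cat",  "prediction": "cat"},
    {"target": "cat",  "prediction": "dog"},
    {"target": "dog",  "prediction": "dog"},
    {"target": "bird", "prediction": "cat"},
    {"target": "bird", "prediction": "bird"},
]

# Confusion matrix: count of examples per (target, prediction) pair.
confusion = Counter((r["target"], r["prediction"]) for r in rows)
print(confusion)

# Selecting a cell on the chart filters the table to matching examples.
def filter_cell(rows, target, prediction):
    return [r for r in rows
            if r["target"] == target and r["prediction"] == prediction]

# e.g. the cats that were misclassified as dogs:
print(filter_cell(rows, "cat", "dog"))
```

The diagonal cells of the matrix hold the correctly classified examples; clicking an off-diagonal cell surfaces exactly the misclassified examples worth inspecting.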

Inspection table for categorical features
Figure 2. Example of predictions table for categorical target features