The confusion matrix appears when an experiment solves a classification problem. If the model performs well, the diagonal cells hold the highest counts.
Every column represents an actual category, as defined by an example's target feature. Every row represents a category predicted by the model, that is, the category to which the model assigns the highest probability.
All examples that share the same combination of actual and predicted categories fall into the same cell, increasing that cell's count.
If a model is good, most of the examples should fall in the diagonal cells, which correspond to identical actual and predicted categories, that is, correct predictions. All other cells represent errors in the prediction.
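As a minimal sketch of how such a matrix could be tallied (using hypothetical labels, not this platform's API), with rows indexed by the predicted category and columns by the actual one:

```python
import numpy as np

def confusion_matrix(actual, predicted, n_classes):
    """Tally a confusion matrix with rows = predicted and columns = actual."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for a, p in zip(actual, predicted):
        cm[p, a] += 1  # each example increments exactly one cell
    return cm

# Hypothetical labels for six examples
actual    = [0, 0, 1, 1, 2, 2]
predicted = [0, 0, 1, 2, 2, 2]
cm = confusion_matrix(actual, predicted, 3)

# The trace (sum of diagonal cells) counts the correct predictions
print(cm.trace())  # 5 correct out of 6
```

Note the example in row 2, column 1: it was predicted as category 2 but actually belongs to category 1, so it lands in an off-diagonal cell.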
Interact with the confusion matrix
You can click on each cell of the confusion matrix to filter the predictions table so that it shows only the examples that fall in that cell.
The cells can display two views. Count shows exactly how many examples fall into a given cell of the confusion matrix. Percentage normalizes the counts so that each row adds up to 100%; that is, each cell shows the percentage of examples with that predicted category that belong to each actual category.
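The percentage view amounts to dividing each row by its total. A small sketch with a made-up matrix (rows = predicted, columns = actual, as above):

```python
import numpy as np

# Hypothetical confusion matrix in Count view: rows = predicted, columns = actual
cm = np.array([[8, 2, 0],
               [1, 6, 3],
               [0, 1, 9]])

# Percentage view: normalize so that each row sums to 100%
row_totals = cm.sum(axis=1, keepdims=True)
percent = 100.0 * cm / row_totals

# First row: of the examples predicted as class 0, 80% are actually class 0
print(percent[0])
```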
How to improve classification results
Many examples spread evenly over the off-diagonal cells indicate that the model has difficulty distinguishing the categories. In that case, you may try a larger model with more parameters.
If many examples fall into a single cell, row, or column, this indicates a systematic bias of the model towards the classes involved.
This may be caused by a problem in the dataset, in the training/validation subset split, or in the model design.
Inspect the misclassified examples to find clues about the problem.
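Outside the platform's interface, the same inspection can be sketched with boolean masks (hypothetical label arrays; the platform does this for you when you click a cell):

```python
import numpy as np

# Hypothetical actual and predicted labels for six examples
actual    = np.array([2, 0, 1, 2, 1, 0])
predicted = np.array([2, 0, 2, 2, 2, 0])

# Indices of all misclassified examples (every off-diagonal cell)
wrong = np.flatnonzero(actual != predicted)

# Indices of examples in one specific cell: predicted as 2 but actually 1,
# useful for spotting a systematic bias towards class 2
cell = np.flatnonzero((predicted == 2) & (actual == 1))

print(wrong, cell)
```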
For a multi-dimensional target, each element of the output tensor contributes one count to the confusion matrix. This means that the total number of counts in the confusion matrix may be higher than the number of examples in the dataset.
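For instance, under the assumption of a hypothetical binary target with three elements per example, every element is tallied separately, so four examples produce twelve counts:

```python
import numpy as np

# Hypothetical multi-dimensional target: 4 examples, 3 binary elements each
actual    = np.array([[0, 1, 1], [1, 0, 1], [0, 0, 1], [1, 1, 0]])
predicted = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 0]])

# Every element of the output tensor contributes one count to the matrix
cm = np.zeros((2, 2), dtype=int)
for a, p in zip(actual.ravel(), predicted.ravel()):
    cm[p, a] += 1

print(cm.sum())  # 12 counts from only 4 examples
```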