Macro-precision

Macro-precision measures the average precision per class. It’s short for macro-averaged precision.

A precision of 1 means the model’s predictions are perfect: every sample classified as the positive class is truly positive.

All classes treated equally

Macro-precision will be low for a model that performs well on the common classes but poorly on the rare ones. It is therefore a complementary metric to overall accuracy, which can be dominated by the common classes.

Precision

Precision is a metric used in binary classification problems to answer the following question: What proportion of positive predictions was actually correct?

Precision is defined as:

\[\text{Precision} = \frac{\text{True positive}}{\text{True positive} + \text{False positive}}\]

Where
True positive is an actual positive that is predicted as positive, and
False positive is an actual negative that is predicted as positive.
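
As a minimal sketch in plain Python (the function name and the zero-division convention are illustrative assumptions, not platform code), precision can be computed by counting these two cases over a set of binary predictions:

```python
def precision(y_true: list[int], y_pred: list[int]) -> float:
    """Proportion of positive predictions (1s) that are actually positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)  # false positives
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0  # 0.0 chosen when nothing is predicted positive

# 3 positive predictions, 2 of them correct -> precision = 2/3
print(precision([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))  # 0.666...
```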

Note that you can always check the precision for each individual class in the Confusion matrix on the Evaluation view.

Macro-averaging

Macro-averaging is used for models with more than 2 target classes, for example, in our tutorial Self sorting wardrobe.

Macro-averaging is performed by first computing the precision of each class, and then taking the average of all precisions.

When macro-averaging, all classes contribute equally regardless of how often they appear in the dataset.

Macro-averaging is the default aggregation method for precision for single-label multi-class problems.
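
As a minimal sketch of this two-step procedure in plain Python (function names and the zero-division convention are illustrative assumptions, not the platform’s implementation):

```python
def class_precision(tp: int, fp: int) -> float:
    """Precision = TP / (TP + FP); returns 0.0 if the class was never predicted."""
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

def macro_precision(counts: dict[str, tuple[int, int]]) -> float:
    """counts maps each class to its (TP, FP) pair; every class gets equal weight."""
    per_class = [class_precision(tp, fp) for tp, fp in counts.values()]
    return sum(per_class) / len(per_class)

# Arbitrary illustrative counts: (5/6 + 2/4) / 2 = 0.666...
print(macro_precision({"cat": (5, 1), "dog": (2, 2)}))
```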

Example:
Let’s imagine you have a multi-class classification problem with 3 classes (A, B, C). The first step is to count the true positives (TP) and false positives (FP) for each class:

A: 2 TP and 8 FP
B: 1 TP and 1 FP
C: 1 TP and 1 FP

Then we calculate the precision for each class:

\[P_A = \frac{2}{2+8} = 0.2,\qquad P_B = \frac{1}{1+1} = 0.5,\qquad P_C = \frac{1}{1+1} = 0.5\]

And finally we average them:

\[\text{Macro-precision} = \frac{P_A + P_B + P_C}{\text{Number of classes}} = \frac{0.2 + 0.5 + 0.5}{3} = 0.4\]
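
To double-check the arithmetic, the same counts can be reproduced with scikit-learn (used here only as a common reference implementation, not something the platform requires). The labels below are constructed so that class A gets 2 TP and 8 FP, while B and C each get 1 TP and 1 FP:

```python
from sklearn.metrics import precision_score

# 14 samples constructed to match the example above.
y_pred = ["A"] * 10 + ["B"] * 2 + ["C"] * 2
y_true = ["A", "A", "B", "B", "B", "B", "C", "C", "C", "C",  # predictions of A: 2 TP, 8 FP
          "B", "A",                                          # predictions of B: 1 TP, 1 FP
          "C", "A"]                                          # predictions of C: 1 TP, 1 FP

print(precision_score(y_true, y_pred, average=None))     # per class: [0.2 0.5 0.5]
print(precision_score(y_true, y_pred, average="macro"))  # macro-averaged: 0.4
```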