Micro-precision

Micro-precision is short for micro-averaged precision. It measures precision over the aggregated contributions of all classes.

Precision = 1 means that every positive prediction is correct: all samples classified as the positive class are truly positive.

Emphasis on common classes
Micro-averaging puts more emphasis on the common classes in the dataset. This may be the preferred behavior for multi-label classification problems: labels that are very rare in the dataset, e.g., a genre that represents only 0.01% of the data examples, shouldn't heavily influence the overall precision metric if the model performs well on the other, more common genres.
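As a rough illustration, here is a minimal plain-Python sketch with made-up per-class counts (the genre names and numbers are assumptions, not taken from the tutorial data):

```python
# Illustrative counts for three genres; the names and numbers are made up.
counts = {
    "rock":   {"tp": 900, "fp": 100},  # common genre, per-class precision 0.90
    "pop":    {"tp": 850, "fp": 150},  # common genre, per-class precision 0.85
    "zydeco": {"tp": 1,   "fp": 9},    # rare genre, per-class precision 0.10
}

# Micro-averaging pools the counts before dividing, so the rare genre
# contributes very little to the final number.
tp_sum = sum(c["tp"] for c in counts.values())
fp_sum = sum(c["fp"] for c in counts.values())

micro_precision = tp_sum / (tp_sum + fp_sum)
print(f"Micro-precision: {micro_precision:.3f}")  # ~0.871, dominated by the common genres
```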

Precision

Precision is a metric used in binary classification problems to answer the following question: What proportion of positive predictions was actually correct?

Precision is defined as:

\[\text{Precision} = \frac{\text{True positive}}{\text{True positive} + \text{False positive}}\]

Where
True positive is an actual positive that is predicted positive, and
False positive is an actual negative that is predicted positive.

Note that you can always check the precision for each individual class in the Confusion matrix on the Evaluation view.
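For reference, this is how the calculation looks as a minimal plain-Python sketch (the labels are made up for illustration; 1 marks the positive class):

```python
# Binary classification example with made-up labels; 1 is the positive class.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Count true positives (actual 1 predicted 1) and false positives (actual 0 predicted 1).
true_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
false_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)

precision = true_positives / (true_positives + false_positives)
print(f"Precision: {precision:.2f}")  # 3 / (3 + 1) = 0.75
```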

Micro-averaging

Micro-averaging is used when a problem has more than 2 labels that can be true, for example, in our tutorial Build your own music critic.

Micro-averaging is performed by first summing the true positives and false positives over all classes, and then computing precision from those sums.

Micro-precision values can be high even if the model is performing very poorly on a rare class since it gives more weight to the common classes.

For single-label multi-class problems, micro-averaged precision is exactly the same as accuracy, so it does not provide any additional information about the model’s performance.
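A small plain-Python sketch of this equivalence (the labels are made up): in a single-label setting, each wrong prediction is one false positive for the predicted class, so the micro-averaged sums reduce to correct over total:

```python
# Single-label multi-class example with made-up labels for classes A, B, C.
y_true = ["A", "B", "C", "A", "B", "C", "A", "A"]
y_pred = ["A", "B", "A", "A", "C", "C", "B", "A"]

# Each sample gets exactly one positive call (its predicted class), so every
# correct prediction is one TP and every wrong prediction is one FP.
tp_sum = sum(1 for t, p in zip(y_true, y_pred) if t == p)
fp_sum = sum(1 for t, p in zip(y_true, y_pred) if t != p)

micro_precision = tp_sum / (tp_sum + fp_sum)
accuracy = tp_sum / len(y_true)
print(micro_precision, accuracy)  # both 0.625
```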

Example:
Let’s imagine you have a multi-class classification problem with 3 classes (A, B, C). The first step is to calculate how many True positives (TP) and False positives (FP) we have for each class:

A: 2 TP and 8 FP
B: 1 TP and 5 FP
C: 1 TP and 3 FP

Then we aggregate all classes:

TPsum: 2 + 1 + 1 = 4
FPsum: 8 + 5 + 3 = 16

And finally we calculate the precision of the aggregated values:

\[\text{Micro-precision} = \frac{TP_{sum}}{TP_{sum} + FP_{sum}} = \frac{4}{4 + 16} = \frac{4}{20} = 0.2\]
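The same calculation as a short plain-Python sketch, using the per-class counts from the example above:

```python
# Per-class (TP, FP) counts from the example above.
counts = {"A": (2, 8), "B": (1, 5), "C": (1, 3)}

tp_sum = sum(tp for tp, _ in counts.values())  # 2 + 1 + 1 = 4
fp_sum = sum(fp for _, fp in counts.values())  # 8 + 5 + 3 = 16

micro_precision = tp_sum / (tp_sum + fp_sum)
print(micro_precision)  # 4 / 20 = 0.2
```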