Mean squared logarithmic error (MSLE)
Mean squared logarithmic error (MSLE) can be interpreted as a measure of the ratio between the true and predicted values.
Mean squared logarithmic error is, as the name suggests, a variation of the mean squared error (MSE).
MSLE only cares about the percentage difference
The introduction of the logarithm makes MSLE care only about the relative difference between the true and the predicted value, or, in other words, only about the percentage difference between them.
This means that MSLE treats small differences between small true and predicted values approximately the same as large differences between large true and predicted values.
| True value | Predicted value | MSE loss | MSLE loss |
|---|---|---|---|
| 30 | 20 | 100 | 0.02861 |
| 30 000 | 20 000 | 100 000 000 | 0.03100 |
| Comment | | big difference | small difference |
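The table values can be reproduced with a short sketch. Base-10 logarithms are assumed here because they match the MSLE figures in the table:

```python
import math

def mse(y_true, y_pred):
    # Squared difference between true and predicted value
    return (y_true - y_pred) ** 2

def msle(y_true, y_pred):
    # Squared difference of the log-transformed values;
    # base-10 logs reproduce the table figures above
    return (math.log10(y_true + 1) - math.log10(y_pred + 1)) ** 2

print(mse(30, 20), round(msle(30, 20), 5))        # 100 0.02861
print(mse(30000, 20000), round(msle(30000, 20000), 4))
```

The MSE losses differ by six orders of magnitude, while the two MSLE losses are nearly equal, since the ratio between true and predicted value is 3:2 in both rows.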
MSLE penalizes underestimates more than overestimates
MSLE also penalizes underestimates more than overestimates, introducing an asymmetry in the error curve.
| True value | Predicted value | MSE loss | MSLE loss |
|---|---|---|---|
| 20 | 10 | 100 | 0.07886 |
| 20 | 30 | 100 | 0.02861 |
| Comment | | no difference | big difference |
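A quick check of the asymmetry, again assuming base-10 logarithms to match the table values:

```python
import math

def msle(y_true, y_pred):
    # Base-10 logs match the table values in this article
    return (math.log10(y_true + 1) - math.log10(y_pred + 1)) ** 2

# Underestimate (20 -> 10) vs. overestimate (20 -> 30):
# MSE is 100 in both cases, but MSLE penalizes the underestimate more
print(round(msle(20, 10), 5))  # 0.07886
print(round(msle(20, 30), 5))  # 0.02861
```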
When to use Mean squared logarithmic error
Use MSLE when doing regression, when you believe that your target, conditioned on the input, is normally distributed, the range of the target values is large, and you don’t want large errors to be penalized significantly more than small ones.
Example: You want to predict future house prices, and your dataset includes homes whose prices differ by orders of magnitude. The price is a continuous value, so this is a regression problem, and MSLE can be used as the loss function.
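As a sketch of why this helps with a wide price range: with a hypothetical pair of homes (the prices below are made up for illustration, and base-10 logs are assumed as elsewhere in this article), a 10 % underestimate contributes roughly the same MSLE whether the home costs 100 000 or 1 000 000:

```python
import math

def msle(y_true, y_pred):
    # Base-10 logs, consistent with the tables in this article
    return (math.log10(y_true + 1) - math.log10(y_pred + 1)) ** 2

# Hypothetical house prices, each predicted 10 % too low
cheap = msle(100_000, 90_000)
pricey = msle(1_000_000, 900_000)
print(cheap, pricey)  # nearly identical contributions
```

With plain MSE, the error on the expensive home would dominate the loss by a factor of 100.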
MSLE math
The loss is the mean over the seen data of the squared differences between the log-transformed true and predicted values, or, writing it as a formula:

loss = 1/N ∑ᵢ₌₁ᴺ (log(yᵢ + 1) − log(ŷᵢ + 1))²

where y is the true value and ŷ is the predicted value.
This loss can be interpreted as a measure of the ratio between the true and predicted values, since:

log(y + 1) − log(ŷ + 1) = log((y + 1) / (ŷ + 1))
The reason ‘1’ is added to both y and ŷ is mathematical convenience: log(0) is not defined, but both y and ŷ can be 0.
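The formula translates directly into NumPy. This minimal sketch (base-10 logs, as in the tables above) also verifies the ratio interpretation numerically:

```python
import numpy as np

def msle(y_true, y_pred):
    # Mean over the data of squared differences of the log-transformed
    # values; 1 is added so that log(0) never occurs
    return np.mean((np.log10(y_true + 1) - np.log10(y_pred + 1)) ** 2)

y = np.array([30.0, 20.0, 0.0])      # a true value of 0 is fine
y_hat = np.array([20.0, 30.0, 0.0])

# Ratio interpretation: log(y+1) - log(ŷ+1) == log((y+1) / (ŷ+1))
ratio_form = np.mean(np.log10((y + 1) / (y_hat + 1)) ** 2)
assert np.isclose(msle(y, y_hat), ratio_form)
```

Common library implementations (e.g. scikit-learn and Keras) use the natural logarithm instead of base 10; this scales every loss value by a constant factor and does not change which model is optimal.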
Further reading
In the tutorial Predict California house prices we use mean squared error as the loss function, but in this case you could just as well use MSLE.
We also have a topic about Mean squared logarithmic error (MSLE) in our Glossary.
Other loss functions
MSLE is one of several loss functions you can use on the Platform. Other examples are: