Binary error

The binary error metric measures how often the model gets the prediction wrong. Since it should decrease as training progresses, it is convenient to view with log scaling.

In a binary classification problem, a prediction is wrong when the thresholded score disagrees with the actual class, for example when an example of the positive class gets a score lower than the threshold.

A binary error of 0 means the model’s predictions are perfect.

The formula for binary error is:

\[\begin{array}{rcl} \text{Binary error} & = & 1 - \text{Binary accuracy} \\ & & \\ & = & 1 - \dfrac{\text{Number of correct predictions}}{\text{Total number of predictions}} \\ \end{array}\]

It’s the complement of the binary accuracy.
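As a concrete illustration, here is a minimal NumPy sketch of the computation. The function name, the array names, and the default threshold of 0.5 are assumptions for this example, not part of any particular library’s API.

```python
import numpy as np

def binary_error(y_true, y_score, threshold=0.5):
    """Fraction of wrong predictions after thresholding the scores."""
    # Scores at or above the threshold are predicted as the positive class.
    y_pred = (y_score >= threshold).astype(int)
    accuracy = np.mean(y_pred == y_true)   # fraction of correct predictions
    return 1.0 - accuracy                  # binary error is its complement

# Example: 1 wrong prediction out of 4 gives a binary error of 0.25.
y_true = np.array([1, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.4, 0.1])
print(binary_error(y_true, y_score))  # 0.25
```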

Suggestions on how to improve

Large discrepancy

If there is a large discrepancy between the training and validation binary error, the model is overfitting: it performs well when shown a training example (resulting in a low training error), but badly when shown a new example it hasn’t seen before (resulting in a high validation error). Try introducing dropout and/or batch normalization blocks to improve generalization, as sketched below.
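As a sketch of what that could look like, the Keras model below inserts a batch normalization block and a dropout block between dense layers. The layer sizes, input shape, and dropout rate of 0.5 are illustrative assumptions, not recommended values.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative model: batch normalization and dropout between dense blocks
# help reduce the gap between training and validation error (overfitting).
model = tf.keras.Sequential([
    layers.Input(shape=(64,)),             # assumed input size for the sketch
    layers.Dense(128, activation="relu"),
    layers.BatchNormalization(),           # stabilizes activations across batches
    layers.Dropout(0.5),                   # randomly drops units during training
    layers.Dense(1, activation="sigmoid"), # binary classification output
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["binary_accuracy"],           # binary error = 1 - binary_accuracy
)
```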

A large discrepancy can also indicate that the validation data is too different from the training data.

High error

If the training binary error is high, the model is not learning well enough. Try building a new model or collecting more training data.
