Binary accuracy

The binary accuracy metric measures how often the model predicts the correct class.

A binary accuracy of 1 means the model’s predictions are perfect.

The formula for binary accuracy is:

\[\begin{array}{rcl} \text{Accuracy} & = & \dfrac{\text{Number of correct predictions}}{\text{Total number of predictions}} \end{array}\]

Or in terms of positive and negative predictions:

\[\begin{array}{rcl} \text{Accuracy} & = & \dfrac{\text{TP + TN}}{\text{TP + TN + FP + FN}} \end{array}\]

Where:
TP = True positive (an actual positive predicted as positive)
TN = True negative (an actual negative predicted as negative)
FP = False positive (an actual negative predicted as positive)
FN = False negative (an actual positive predicted as negative)
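
As a minimal illustration, the Python sketch below computes binary accuracy from the counts in the formula above. The `y_true` and `y_pred` arrays are made-up placeholder values, not output from a real model.

```python
import numpy as np

# Placeholder labels and predictions; replace with your own model's output.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # ground-truth labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])  # predictions (already thresholded)

tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"Binary accuracy: {accuracy:.2f}")   # 0.75 for the values above
```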

Suggestions on how to improve

Large discrepancy

If there is a large discrepancy between training and validation accuracy, the model is overfitting: it performs well on examples it saw during training (low training loss) but poorly on new examples it has not seen before (high validation loss). To improve generalization, try introducing dropout and/or batch normalization layers, as sketched below.
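
The following is a minimal sketch of a small Keras classifier with dropout and batch normalization added; the layer widths, dropout rate, and the `NUM_FEATURES` placeholder are assumptions to adapt to your own architecture and dataset.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FEATURES = 64  # placeholder input size; set this to your feature count

model = models.Sequential([
    layers.Input(shape=(NUM_FEATURES,)),
    layers.Dense(32, activation="relu"),
    layers.BatchNormalization(),   # normalize activations between layers
    layers.Dropout(0.25),          # randomly drop 25% of units during training
    layers.Dense(16, activation="relu"),
    layers.Dropout(0.25),
    layers.Dense(1, activation="sigmoid"),  # single binary output
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["binary_accuracy"])
```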

A large discrepancy can also indicate that the validation data are too different from the training data. In that case, create a new split between the training and validation subsets, for example as shown below.
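
One way to re-create the split is with scikit-learn's `train_test_split`; in this sketch the `X` and `y` arrays are placeholder data, and stratifying on the labels is an assumption that keeps the class balance consistent across both subsets.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder dataset; replace with your own features and labels.
X = np.random.rand(1000, 64)             # 1000 samples, 64 features
y = np.random.randint(0, 2, size=1000)   # binary labels

# Shuffle and stratify so both subsets come from the same distribution.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
```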

Low accuracy

If the training accuracy is low, the model is not learning the problem well enough. Try building a new model architecture or collecting more training data.
