The binary accuracy metric measures how often the model gets the prediction right.
A binary accuracy of 1 means the model’s predictions are perfect.
The formula for binary accuracy is:

Binary accuracy = number of correct predictions / total number of predictions
Or in terms of positive and negative predictions:

Binary accuracy = (TP + TN) / (TP + TN + FP + FN)
TP = True positive (Actual positive is predicted positive)
TN = True negative (Actual negative is predicted negative)
FP = False positive (Actual negative is predicted positive)
FN = False negative (Actual positive is predicted negative)
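The second formula can be sketched in plain Python; the counts used in the example are purely illustrative:

```python
def binary_accuracy(tp, tn, fp, fn):
    """Fraction of all predictions that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative counts: 40 true positives, 45 true negatives,
# 5 false positives, 10 false negatives -> 85 correct out of 100.
print(binary_accuracy(tp=40, tn=45, fp=5, fn=10))  # 0.85
```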
Suggestions on how to improve
If there is a large discrepancy between training and validation accuracy, the model is likely overfitting: it performs well on examples it was trained on (resulting in a low training loss), but badly on new examples it hasn’t seen before (resulting in a high validation loss). To improve generalization, try introducing dropout and/or batch normalization blocks.
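As a rough sketch of what a dropout block does during training (a minimal pure-Python version, not a framework implementation): each activation is zeroed with probability `rate`, and the survivors are scaled by 1 / (1 - rate) so the expected activation is unchanged. This forces the network not to rely on any single unit, which helps against overfitting.

```python
import random

def dropout(activations, rate, rng=random):
    """Zero each activation with probability `rate`; scale the
    survivors by 1 / (1 - rate) to keep the expected value the same.
    Applied only during training; at inference the layer is a no-op."""
    if rate == 0.0:
        return list(activations)
    scale = 1.0 / (1.0 - rate)
    return [0.0 if rng.random() < rate else a * scale
            for a in activations]

random.seed(0)
print(dropout([1.0, 2.0, 3.0, 4.0], rate=0.5))
```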
A large discrepancy can also indicate that the validation data are too different from the training data. In that case, create a new split of the data into training and validation subsets.
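One minimal way to re-split the data (a sketch, assuming the dataset fits in memory): shuffle the full dataset before splitting, so both subsets are drawn from the same distribution.

```python
import random

def train_val_split(examples, val_fraction=0.2, seed=42):
    """Shuffle the full dataset, then hold out `val_fraction` of it
    for validation. Shuffling first keeps the two subsets similar."""
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    n_val = int(len(examples) * val_fraction)
    return examples[n_val:], examples[:n_val]  # (train, val)

train, val = train_val_split(range(100))
print(len(train), len(val))  # 80 20
```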
If the training accuracy is low, the model is not learning well enough. Try building a different model architecture or collecting more training data.