Deployment bias occurs when a model is used in a way that differs from its intended use.
Example: When a skin lesion segmentation model developed on human skin data is used to segment animal skin, the results are worse than expected.
How to prevent deployment bias
You can ask yourself:
Who will the model impact?
How will I use this model and what is my aim with this model?
What data did I train this model on and with what parameters?
Is this model going to make a decision on its own, or support human decision-making?
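One way to make these answers actionable is a minimal sketch like the one below: record the model's intended use in a small "model card" dictionary and compare it against the planned deployment before release. All names here (`check_deployment`, the card fields) are hypothetical illustrations, not a standard API.

```python
# Hypothetical sketch: encode intended use as a model card and
# check it against a planned deployment to catch deployment bias early.

def check_deployment(model_card, deployment):
    """Return a list of warnings when the deployment differs from intended use."""
    warnings = []
    if deployment["domain"] != model_card["training_domain"]:
        warnings.append(
            f"Domain mismatch: trained on '{model_card['training_domain']}', "
            f"deployed on '{deployment['domain']}'"
        )
    if deployment["role"] not in model_card["intended_roles"]:
        warnings.append(
            f"Role mismatch: '{deployment['role']}' not in intended roles "
            f"{model_card['intended_roles']}"
        )
    return warnings

# Example from the text: a segmentation model trained on human skin,
# mistakenly planned for automated use on animal skin.
card = {
    "training_domain": "human skin images",
    "intended_roles": ["decision support"],
}
issues = check_deployment(
    card, {"domain": "animal skin images", "role": "automated decision"}
)
for issue in issues:
    print(issue)
```

Running the check for the mismatched deployment prints both warnings; for a deployment matching the card, it returns an empty list and the model can proceed.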