Machine Bias vs Human Bias

6 October 2018


An often-cited concern with implementing AI decision-making is the “machine bias” learned from the datasets that were used to build the model.
Proposed solutions include drawing on multiple data sources to increase external validity and cleaning the datasets. All of this helps to improve AI models for decision-making, but it does not address the core of the issue.
The dangerous bias does not stem from a single human or from inaccurate data entry; it is the categorical bias that exists in our society as a whole. As such, machine biases are no more and no less dangerous, problematic, and difficult to detect than biases in human decisions.
The implementation of AI happens to coincide with our heightened awareness of racial, gender, and other biases in humans and AI. It is right and necessary for us to be aware of their existence and to take action against them. It is, however, not a reason to discount machine decision-making. No, AI is not perfectly “neutral”, “fair”, or “objective”, but it is still MORE neutral, fair, and objective than individual humans. We cannot yet completely remove all the bias that goes into building models, but every bit counts. Every correction, every conscious mitigation improves the AI model and gives it an advantage over the humans (one such mitigation is sketched below). Even without any correction, the model is no more biased than the humans it learns from, and it is free of the short-term individual fluctuations stemming from hunger, tiredness, moods, and the movies we recently watched.
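To make “conscious mitigation” concrete, here is a minimal sketch of one standard pre-processing correction, reweighing in the style of Kamiran and Calders, applied to synthetic data. The data, group labels, and function names are illustrative assumptions, not any particular production system:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def reweigh(y, group):
    """Reweighing in the style of Kamiran & Calders (2012): weight each
    (group, label) cell so group and label look statistically independent."""
    weights = np.empty(len(y), dtype=float)
    for g in (0, 1):
        for label in (0, 1):
            cell = (group == g) & (y == label)
            expected = (group == g).mean() * (y == label).mean()
            weights[cell] = expected / cell.mean()  # up-weight rare cells
    return weights

# Tiny synthetic demo: group 1 carries positive labels far more often.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y = (rng.random(1000) < np.where(group == 1, 0.7, 0.3)).astype(int)

print(demographic_parity_gap(y, group))  # large gap in the raw labels
w = reweigh(y, group)
for g in (0, 1):
    m = group == g
    print(g, (w[m] * y[m]).sum() / w[m].sum())  # weighted rates now match
```

A model trained with these sample weights sees a dataset in which group membership and outcome are statistically independent. It is one correction among many, not a complete fix.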
Take as an example the sobering case researched by ProPublica, which found that
“COMPAS, a machine learning algorithm used to determine criminal defendants’ likelihood to recommit crimes, was biased in how it made predictions. The algorithm is used by judges in over a dozen states to make decisions on pre-trial conditions, and sometimes, in actual sentencing.”
We are rightfully horrified and shocked at this. But people are often too eager to conclude that we should not be using AI to make such important decisions. What we forget is that the algorithm is only used “sometimes” in actual sentencing, whereas the human judgment that is “always” used formed the biased dataset in the first place and will not be any fairer. Having identified bias in the machine model, steps can be taken to adjust it, as sketched below. That might not be easy, but it is probably still much easier than re-educating every human judge, police officer, news reporter, and worried mother teaching her children to stay out of the gentrified neighborhoods.
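To illustrate what “adjusting” such a model could look like, here is a minimal sketch of the kind of audit ProPublica ran, a comparison of false positive rates across groups, followed by one possible post-hoc correction on synthetic data. The scores, group labels, and threshold choice are illustrative assumptions; this is not the COMPAS system:

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives that get flagged as high risk."""
    return y_pred[y_true == 0].mean()

def equalize_fpr_thresholds(scores, y_true, group, target_fpr=0.2):
    """One possible post-hoc correction: pick a per-group score cutoff so
    that every group's false positive rate lands on the same target."""
    return {
        g: np.quantile(scores[(group == g) & (y_true == 0)], 1 - target_fpr)
        for g in np.unique(group)
    }

# Synthetic audit data (NOT the COMPAS model): risk scores are shifted
# upward for group 1, so one global cutoff flags its innocents more often.
rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, size=n)
y_true = rng.integers(0, 2, size=n)            # 1 = actually reoffended
scores = rng.normal(loc=y_true + 0.4 * group)  # biased risk score

global_pred = scores > 0.9
for g in (0, 1):
    m = group == g
    print(g, false_positive_rate(y_true[m], global_pred[m]))  # unequal FPRs

cuts = equalize_fpr_thresholds(scores, y_true, group)
fair_pred = scores > np.where(group == 1, cuts[1], cuts[0])
for g in (0, 1):
    m = group == g
    print(g, false_positive_rate(y_true[m], fair_pred[m]))    # ~target FPR
```

Equalizing false positive rates is only one fairness criterion, and it is known that it cannot in general be satisfied simultaneously with others, such as equal calibration across groups, so the choice of correction remains a human judgment call.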

https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
https://becominghuman.ai/how-to-prevent-bias-in-machine-learning-fbd9adf1198
