Everyone knows the automatic soap dispenser: no need to press a button, the soap simply falls into your hand. Few people realise, however, that these dispensers have trouble detecting darker skin. A Facebook employee tweeted a video showing a soap dispenser that worked for white skin but not for darker skin.
This is one of many examples of AI exhibiting algorithmic bias. Bias shows up in many places, from university rankings to recruiting algorithms. Cathy O'Neil calls the models that exhibit this kind of bias "weapons of math destruction". What these models have in common is their dangerous feedback loops. Take people in poorer neighbourhoods: they tend to have higher debts and higher crime rates in their surroundings. As a result, they see more loan ads, police watch them more closely, and they tend to receive longer sentences. All of this data is fed back into the algorithm, which then labels these people as high risk.
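To make the feedback loop concrete, here is a minimal, hypothetical sketch of how a risk-scoring model can amplify an initial difference in scrutiny. The variable names, the update rule, and the numbers are illustrative assumptions, not taken from O'Neil's book or any real system.

```python
# A hypothetical sketch of a feedback loop in a risk-scoring model.
# `true_rate`, `scrutiny`, and the update rule are illustrative assumptions.

def observed_incidents(true_rate: float, scrutiny: float) -> float:
    """More scrutiny surfaces more incidents, regardless of the true rate."""
    return true_rate * scrutiny

def update_risk_score(score: float, incidents: float, weight: float = 0.5) -> float:
    """Feed observed incidents back into the risk score."""
    return score + weight * incidents

# Two neighbourhoods with the same true incident rate, but different initial scrutiny.
true_rate = 0.1
score_a, score_b = 1.0, 1.0        # start with equal risk scores
scrutiny_a, scrutiny_b = 1.0, 2.0  # neighbourhood B is watched more closely

for year in range(5):
    score_a = update_risk_score(score_a, observed_incidents(true_rate, scrutiny_a))
    score_b = update_risk_score(score_b, observed_incidents(true_rate, scrutiny_b))
    # Higher scores trigger more policing and targeting, which raises scrutiny further.
    scrutiny_a = 1.0 + score_a
    scrutiny_b = 1.0 + score_b

print(f"score A: {score_a:.2f}, score B: {score_b:.2f}")
# Despite identical underlying behaviour, B's score drifts upward:
# the model's own outputs generate the data that confirms them.
```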
So should we conclude that we shouldn't replace work previously done by people with AI applications? No. The examples above show that AI tends to be misused unintentionally and is not a solution to everything, but it still has a lot of potential. Data scientists should be aware of the potential for bias in their algorithms. In addition, regulation should become stricter as the outcomes of these applications grow more important.
The next question, then, is how to regulate the mathematical models that play an ever larger role in our lives. Regulation could take the form of auditing. Just as a company's internal finances, whose statements are issued for auditing, appear as black boxes to outsiders, so do algorithms. Algorithms should therefore be treated the same way and be audited to restore faith in their potential.
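As a rough idea of what one step of such an audit could look like, the sketch below compares a model's positive-outcome rates across groups. The data, the group labels, and the 0.8 threshold (the "four-fifths rule") are illustrative assumptions, not a prescribed audit standard.

```python
# A minimal sketch of one audit check: comparing approval rates across groups.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) tuples -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions: group A approved 80/100 times, group B 50/100 times.
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 50 + [("B", False)] * 50

rates = approval_rates(decisions)
ratio = disparate_impact(rates)
print(rates, f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths rule, used here purely as an illustrative threshold
    print("Flag for review: outcomes differ substantially across groups.")
```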
Sources:
https://www.dailymail.co.uk/sciencetech/article-4800234/Is-soap-dispenser-RACIST.html
https://hbr.org/2018/11/why-we-need-to-audit-algorithms
Cathy O'Neil (2016), Weapons of Math Destruction