Why are algorithms racist and how is artificial intelligence reinforcing this?

9 October 2020


Artificial intelligence seems to be the solution to large worldwide problems: expensive tasks can be automated, and it can help humans make better decisions based on clearer information (Davenport and Ronanki, 2018). However, the problems of artificial intelligence, and even of basic algorithms, are becoming more visible in everyday news. In 2020, a Twitter algorithm whose basic function is to crop images that are too large to fit in a tweet cropped pictures in such a way that white people were displayed more often than people of color (Dans, 2020).

This phenomenon is not unique; more advanced algorithms (with integrated AI) suffer from the same problem. The Dutch tax authorities used algorithms to detect tax fraud among Dutch households. The algorithm disadvantaged people with dual nationalities compared with other Dutch citizens: they were more often suspected of fraud, even if they had not committed any fraudulent activity at all (ANP, 2020).

Algorithms are very good at making predictions and classifying information, but they base those predictions solely on existing data. A computer cannot rationalize why it makes certain decisions. Unlike humans, computers use training data to look for patterns and then apply those patterns to make predictions; as a result, they cannot explain how they arrived at a particular decision. Problems such as Twitter's cropping algorithm can be caused by unbalanced data or sample bias, in which the algorithm simply does not see enough examples in which people of color are the main focus of a large picture.
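To make the sample-bias point concrete, here is a minimal sketch that trains a simple classifier on data in which one (made-up) group is heavily under-represented and then measures accuracy per group. Everything in it (the groups, the numbers, the thresholds) is invented for illustration; it is not Twitter's cropping model or any real dataset.

```python
# Hypothetical illustration of sample bias: the model is never told which group
# a sample belongs to, yet it performs worse on the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, shift):
    """Generate toy feature vectors and labels for one made-up group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
X_a, y_a = make_group(950, shift=0.0)
X_b, y_b = make_group(50, shift=1.5)
X_train = np.vstack([X_a, X_b])
y_train = np.concatenate([y_a, y_b])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced, unseen samples from each group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(500, shift=shift)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.2f}")
# Group B typically scores noticeably worse: the bias comes from the data,
# not from any explicit "group" variable.
```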

 

“Algorithms don’t do a good job of detecting their own flaws” – Clay Shirky

 

As algorithms get smarter, we tend to have less control over them. Basic algorithms, such as decision trees or linear regression, are relatively easy to understand, so we can change them if necessary, for example when an algorithm bases its decisions on unethical grounds or produces unethical outcomes. Modern algorithms perform better but are harder to understand and to change: we may know which variables have a bigger impact than others, but their creators often have little idea of how those variables affect the outcome. Well-known examples are support vector machines, random forests and k-nearest neighbor algorithms. Right now, and in the near future, many algorithms are becoming more powerful by incorporating elements of AI and machine learning. These algorithms behave like black boxes: we have little idea which variables influence the outcome or how important they are. Identifying unethical aspects is therefore hard, if not impossible (Heilweil, 2020). A well-known example is the deep neural network.
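As a rough illustration of this interpretability gap, the sketch below fits an interpretable linear model and a less transparent random forest to the same synthetic data (invented for this post, not taken from any of the cited sources). The linear coefficients state exactly how each variable moves the prediction; the forest's feature importances only rank the variables.

```python
# Hedged sketch: interpretable model vs. a more opaque one, using scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                      # three made-up input variables
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=1000)

# Interpretable: each coefficient says how strongly, and in which direction,
# a variable moves the outcome, so a problematic dependence can be spotted and removed.
linear = LinearRegression().fit(X, y)
print("linear coefficients:", np.round(linear.coef_, 2))   # roughly [ 2. -1.  0.]

# Less transparent: feature_importances_ only rank the variables; they say nothing
# about how (or in which direction) a variable changes an individual prediction.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print("forest importances: ", np.round(forest.feature_importances_, 2))
```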

 

References

ANP. 2020. Kamer geschokt door ‘harde conclusies’ over discriminatie fiscus. [Online]. [Accessed 8 October 2020]. Available from: https://www.trouw.nl/binnenland/kamer-geschokt-door-harde-conclusies-over-discriminatie-fiscus~b58b06e0/

Dans, E. 2020. Biased Algorithms: Does Anybody Believe Twitter Is Racist?. [Online]. [Accessed 8 October 2020]. Available from: https://www.forbes.com/sites/enriquedans/2020/10/03/biased-algorithms-does-anybody-believe-twitter-isracist/?ss=ai#dbf52e584665

Davenport, T.H. and Ronanki, R. 2018. Artificial Intelligence for the Real World. [Online]. [Accessed 8 October 2020]. Available from: https://hbr.org/2018/01/artificial-intelligence-for-the-real-world

Heilweil, R. 2020. Why algorithms can be racist and sexist. [Online]. [Accessed 8 October 2020]. Available from: https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency

 


1 thought on “Why are algorithms racist and how is artificial intelligence reinforcing this?”

  1. Hi Jesse,

    I believe your post made extremely relevant points regarding a massive problem, one that I believe will grow even further as we become more dependent on AI. When we feed data to AI, the information that is provided is often biased. This is especially visible when using information related to humans and predicting human behavior. Racism and sexism are very much prevalent in our societies. The historically racist and sexist choices made in hiring processes are often translated into data, which is then used to create AI algorithms that only amplify this effect. This is increasingly dangerous as companies become more reliant on algorithms for screening applicants. However, it is not just job opportunities that may be taken from minorities. There have been cases, such as the one with COMPAS in a Wisconsin court, where the likelihood of reoffending was predicted by an AI program. The program flagged 45% of black convicts as likely to reoffend, compared with 24% of white convicts, leading to longer sentences for black defendants. I believe we need to ensure that our racial and gender biases do not translate into algorithms that create further inequalities, but rather have technological advances (such as AI) help get rid of our human ignorance (whether those biases are conscious or unconscious).

    Interesting read: https://towardsdatascience.com/racist-data-human-bias-is-infecting-ai-development-8110c1ec50c
