Help! The Computer is Racist!

24 October 2017

Artificial Intelligence (AI) has made major leaps in the last decade, with advancements we are confronted with daily, sometimes knowingly and sometimes not: ride-sharing apps that match you with other passengers to minimize detours, or Facebook's ability to recognize faces in the pictures you upload (Narula 2017).

Machine Learning (ML) is the subfield of AI that forms its statistical arm. Its main focus is programming algorithms that learn from data, complete tasks and make predictions (Schmidt 2016). Through machine learning we teach computers to recognize patterns in data on their own. It is tempting to assume that the patterns a computer finds this way are free of prejudice.
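To make "learning from data" concrete, here is a minimal sketch in Python (assuming scikit-learn is installed; the toy data and the pass/fail scenario are invented purely for illustration):

```python
# Minimal sketch of machine learning: the algorithm is given no rules,
# only labelled examples, and infers a pattern from them on its own.
from sklearn.tree import DecisionTreeClassifier

# Each row: [hours_studied, hours_slept]; label: 1 = passed, 0 = failed.
X = [[8, 7], [7, 8], [1, 4], [2, 5], [9, 6], [0, 8]]
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier().fit(X, y)

# The model generalizes the pattern it found to a student it never saw.
print(model.predict([[6, 7]]))  # -> [1], i.e. likely to pass
```

The crucial point: nobody told the model that studying matters. It inferred that from the examples, and it would just as readily infer a prejudice if the examples contained one.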

A report from the Human Rights Data Analysis Group, however, contradicts this. It shows that selection bias in ML systems can have serious, detrimental consequences for society. Police forces in the United States use software that predicts crime from historical patterns. Recorded crime rates were higher in certain neighbourhoods with a higher proportion of people of color, so police patrolled there more often, which led to more arrests of people of color. This created a self-reinforcing feedback loop: the extra arrests made the software ever more prejudiced about those neighbourhoods (Buranyi 2017).
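This dynamic is easy to reproduce in a toy simulation. The sketch below is pure Python with invented numbers, not the actual predictive-policing software: two neighbourhoods have identical true crime rates, but the historical arrest data starts out slightly skewed, and patrols are allocated in proportion to past arrests.

```python
import random

random.seed(42)

# Two neighbourhoods with the SAME underlying crime rate; only the
# historical arrest counts differ. All numbers are invented.
true_crime_rate = {"A": 0.10, "B": 0.10}
arrests = {"A": 60, "B": 40}   # the initial bias in the data
TOTAL_PATROLS = 100

for year in range(10):
    total = arrests["A"] + arrests["B"]
    for hood in arrests:
        # Patrols go where past arrests were made...
        patrols = round(TOTAL_PATROLS * arrests[hood] / total)
        # ...and new arrests can only happen where patrols are sent,
        # so this year's skew becomes next year's "objective" data.
        arrests[hood] += sum(
            random.random() < true_crime_rate[hood] for _ in range(patrols)
        )

print(arrests)  # the recorded gap between A and B keeps growing
```

Even though both neighbourhoods are identical, the initial skew never washes out: the recorded gap between them keeps growing, and the software looks ever more justified in its own predictions.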

Google urges users and developers to be aware that something based on data is not automatically neutral (Van Noort 2017). Gupta, research director of a Google ML lab, explains that an ML system learns much like a child or a dog: feed it prejudices and it will give you prejudices back (Van Noort 2017). A well-known example of ML giving prejudices back comes from Google itself: its photo app auto-tagged pictures of Black people as "gorillas" (Kasperkevic 2015). Another output that caused discontent was Google's image search, where a query for "CEO" returned almost exclusively older white men (Hellman 2017). Such systems are not given their decision rules by programmers; they derive them from large amounts of data. These outcomes are anything but preferable: the selection bias in the system feeds existing prejudices, and the computer becomes racist.
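How little it takes for "bias in" to become "bias out" can be shown with a deliberately simplified sketch (pure Python; the dataset is invented and far cruder than what a real search engine trains on):

```python
from collections import Counter

# Invented stand-in for historical image data: 90 of the 100 "CEO"
# examples the system gets to see happen to be older white men.
training_data = [("ceo", "older white man")] * 90 + [("ceo", "woman")] * 10

# A data-driven system has no notion of what a CEO *should* look like;
# asked to rank results, it reproduces the distribution it was shown.
counts = Counter(attr for job, attr in training_data if job == "ceo")
ranked = [attr for attr, _ in counts.most_common()]

print(ranked)  # ['older white man', 'woman'] -- bias in, bias out
```

The system is not malicious; it is faithful, and that is exactly the problem: it faithfully mirrors a skewed world back at us.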

It is important to find a way to integrate honesty and integrity into ML systems. Is it possible to teach ML systems this? How do we ensure honesty in our own judgments? Is this knowledge transferable, or is it tacit and stuck in Polanyi's Paradox? What do you think: should more ethics specialists be involved in the ML adventure?

References

Buranyi, S. (2017). Rise of the racist robots – how AI is learning all our worst impulses. [online] the Guardian. Available at: https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses [Accessed 2 Oct. 2017].

Hellman, G. (2017). Truth in Pictures: What Google Image Searches Tell Us About Inequality at Work. [online] Code Like A Girl. Available at: https://code.likeagirl.io/truth-in-pictures-what-google-image-searches-tell-us-about-inequality-at-work-554583cfe99d [Accessed 3 Oct. 2017].

Kasperkevic, J. (2015). Google says sorry for racist auto-tag in photo app. [online] the Guardian. Available at: https://www.theguardian.com/technology/2015/jul/01/google-sorry-racist-auto-tag-photo-app [Accessed 2 Oct. 2017].

Narula, G. (2017). Everyday Examples of Artificial Intelligence and Machine Learning. [online] TechEmergence. Available at: https://www.techemergence.com/everyday-examples-of-ai/ [Accessed 3 Oct. 2017].

Schmidt, M. (2016). Clarifying the uses of artificial intelligence in the enterprise. [online] TechCrunch. Available at: https://techcrunch.com/2016/05/12/clarifying-the-uses-of-artificial-intelligence-in-the-enterprise/ [Accessed 2 Oct. 2017].

van Noort, W. (2017). De Computer is Racistisch [The Computer is Racist]. NRC. [online] Available at: https://www.nrc.nl/nieuws/2017/09/19/de-computer-is-racistisch-13070987-a1573906 [Accessed 1 Oct. 2017].
