Are computers becoming racist?

25 September 2017


Recently, Google issued a warning about machine learning (ML), a subfield of artificial intelligence that gives computers the ability to learn without being explicitly programmed. Machine learning teaches computers to independently discover patterns in data. It is tempting to assume that this makes it free of prejudice, but the fact that something is based on data does not make it neutral. ML learns the way a child or a dog learns: teach it prejudices, and eventually they will come back to you (Devlin, 2017).

ML makes independent decisions based on patterns in huge amounts of data, which means that patterns from the past are continued or even reinforced. For example, when you enter ‘CEO’ into Google Images, the majority of the results are white middle-aged men (Devlin, 2017). Companies that use ML recognize the severity of this problem, but have not yet found a proper solution.

Google attributes these problems to biases that occur in machine learning (Van Noort, 2017). Users themselves can make a system biased, which is known as interaction bias. In one experiment, for example, Google asked users to draw shoes. Most people drew men’s shoes, and as a result the machine learning system did not recognize women’s shoes as shoes. Selection bias occurs when the data used to train a machine learning system contains a disproportionate number of people from a specific group; the system then learns to recognize that group better than others. This has major implications for predictive applications: recruitment and selection using ML is becoming increasingly common.
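
To make the selection bias concrete, here is a minimal sketch in Python (using NumPy and scikit-learn; the synthetic data, group names and labelling rules are my own illustrative assumptions, not taken from the articles). A classifier is trained on data in which one group is heavily over-represented, and it ends up recognizing that group far better than the other.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def sample_group(n, rule):
    # Draw n two-dimensional points; the labelling rule differs per group.
    X = rng.normal(size=(n, 2))
    return X, rule(X).astype(int)

rule_a = lambda X: X[:, 0] > 0   # group A: the label depends on feature 0
rule_b = lambda X: X[:, 1] > 0   # group B: the label depends on feature 1

# Training set with heavy selection bias: 950 samples from group A, 50 from B.
Xa, ya = sample_group(950, rule_a)
Xb, yb = sample_group(50, rule_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group.
for name, rule in (("A", rule_a), ("B", rule_b)):
    X_test, y_test = sample_group(1000, rule)
    print(f"accuracy on group {name}: {model.score(X_test, y_test):.2f}")

# Typical output: roughly 0.95 or higher for group A, but barely above 0.5
# for group B. Nothing in the algorithm is 'prejudiced'; the imbalance in
# the training data alone makes the system much worse at the smaller group.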

Does this mean that machine learning is an unfair system built on biases? Not necessarily: decisions left to humans are far from perfect as well. Companies are currently trying to develop fairer systems for self-learning computers. A ‘Partnership on Artificial Intelligence’ has been set up, in which large ML companies such as Google, Apple and Facebook join forces to tackle problems related to ML and artificial intelligence (Van Noort, 2017). However, many of these systems are developed inside big technology companies that are not fully transparent. Although these companies say they take ethics seriously, they often do not employ specialists in ethics.

To conclude, machine learning can be very useful for analyzing data, and its value should not be underestimated. However, it is always wise to keep a human eye on the results.

Sources

Devlin, H. (2017) ‘AI programs exhibit racial and gender biases, research reveals’, The Guardian, 13 April 2017, https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals

Van Noort, W. (2017) ‘De computer is racistisch’ [The computer is racist], NRC, 19 September 2017, https://www.nrc.nl/nieuws/2017/09/19/de-computer-is-racistisch-13070987-a1573906

 


1 thought on “Are computers becoming racist?”

  1. Thanks for bringing this issue to light, Nina. There is an increasing number of cases in which big data has proven to be a social mirror, reflecting the biases and inequalities we have in society. For example, there’s a case in which a programme was designed to pre-select candidates for a UK medical school. The deep learning algorithms caused the programme to negatively select against women and ethnic minority applicants. In the same way, when researchers from Boston University asked an AI machine to complete the sentence “Man is to computer programmer as woman is to x”, the machine answered “homemaker”. In March 2016 Microsoft launched a Twitter chatbot named Tay. The robot was meant to start playful conversations, but in less than 24 hours it started using racist language and tweeting neo-Nazi propaganda. Tay’s adventures raise serious questions.
    Fortunately, there are ways to avoid racist algorithms. Johanna Burai, a graphic designer, created the World White Web project. She noticed that when searching for images on search engines, she would often exclusively find “white” examples. Try typing “hands” into Google and you will find mainly “white” hands. This motivated her to create a website that offers alternative images that can be used by content creators. By increasing the number of “alternative” images, more of these pictures will be picked up by search engines, redressing the balance.
    These concerns are partly a result of the lack of diversity within the tech industry. It is necessary to create more diverse data sets to train AI machines with, and to build algorithms that explain their decision-making. As you mentioned in your blog, the majority of systems are not completely transparent. Best practices can then be shared among software vendors.

    Bodkin, H. (2017) ‘AI robots are sexist and racist, experts warn’, The Telegraph, http://www.telegraph.co.uk/news/2017/08/24/ai-robots-sexist-racist-experts-warn/
    Kleinman, Z. (2017) ‘Artificial intelligence: How to avoid racist algorithms’, BBC News, http://www.bbc.com/news/technology-39533308
    Vincent, J. (2016) ‘Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day’, The Verge, https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
