Artificial intelligence (AI) is a technology that dates back to 1956 and is increasingly present in our daily lives. For example, most of us have already used a phone's voice recognition through a voice assistant such as Siri or Alexa. The opportunities these technologies provide are huge and have the potential to revolutionize our daily lives. AI has a growing impact on our choices, and it is starting to be used in important decisions, such as personnel recruiting and medical predictions.
These positive aspects could raise human knowledge to a new level. Unfortunately, at the same time, AI also poses serious threats to our society. One of the issues is that machines can be biased or even discriminatory. The most famous example is Tay, a chatbot developed by Microsoft and deployed on Twitter, which within a few hours began posting pro-Hitler messages. Another example comes from a beauty contest judged by a machine in which the winners were almost all white. More importantly, ProPublica found that COMPAS, an engine used in the American justice system to assess eligibility for parole, was discriminatory towards African Americans: the system was almost twice as likely to mislabel black defendants as potential repeat offenders.
All these results can be explained by our current reality. AI is trained with data, and data can present a distorted picture of reality. For example, because statistics show a higher percentage of African Americans being convicted in the USA, an AI system will judge African Americans as more likely to reoffend, and in this way it will deny them parole more often. Another example of potential bias comes from facial recognition. If a facial recognition system is trained mostly on pictures of white people, it will be better at recognizing white faces: in some areas of the USA, police surveillance cameras are 5 to 10% less accurate at identifying African Americans than Caucasians. Conversely, similar systems developed in East Asia recognize East Asian faces better than Caucasian ones. Furthermore, imagine an AI system being responsible for hiring decisions within a company. If the system is trained on data from the company's successful employees, it will reject people who do not fit the profile of those employees. Amazon created such a system, and it ended up selecting white men most of the time.
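To make this mechanism concrete, here is a minimal sketch in Python, using entirely synthetic data, of how a model trained on a skewed sample can look accurate overall while failing on an underrepresented group. The group names, proportions, and decision rules below are illustrative assumptions, not data from any of the systems mentioned above.

```python
# Minimal sketch: a classifier trained on skewed data performs
# unevenly across groups. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Generate features and labels where the 'true' rule is a
    group-specific linear boundary (the weights differ per group)."""
    X = rng.normal(size=(n, 2))
    y = (X @ weights > 0).astype(int)
    return X, y

# Group A dominates the training set (90%); group B is underrepresented
# and follows a different underlying rule.
Xa, ya = make_group(900, np.array([1.0, 1.0]))
Xb, yb = make_group(100, np.array([1.0, -1.0]))

model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group.
for name, w in [("A", np.array([1.0, 1.0])), ("B", np.array([1.0, -1.0]))]:
    X, y = make_group(1000, w)
    print(f"accuracy on group {name}: {model.score(X, y):.2f}")
# Typical output: group A well above 0.9, group B near chance --
# the model mostly learned the majority group's pattern.
```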
Looking at all these examples, it seems that AI is sometimes condemned to be biased because, in many cases, it is built and trained with biased data. To overcome this problem, when developing an AI algorithm, a company should carefully inspect the inputs that shape the final AI product and always try to diversify the origins of the input data. Moreover, it would be useful to create a supervisory committee that constantly checks and updates the AI, to make sure that no discriminatory decisions are being taken, or to correct them when they occur.
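One concrete check such a committee could automate is comparing selection rates across groups. The sketch below uses hypothetical hiring data and the common "four-fifths" rule as an assumed cutoff; the group names and numbers are made up for illustration.

```python
# Minimal sketch of an automated fairness check: compare per-group
# selection rates and flag violations of the "four-fifths" rule.
# Data and threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    times the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical hiring decisions: (applicant group, hired?)
decisions = ([("men", True)] * 40 + [("men", False)] * 60
             + [("women", True)] * 20 + [("women", False)] * 80)
print(selection_rates(decisions))        # {'men': 0.4, 'women': 0.2}
print(disparate_impact_flags(decisions)) # {'men': False, 'women': True}
```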
While at first sight this solution seems relatively easy to implement, zooming into the problem shows that it is not. AI is becoming increasingly complex, and the processes that produce the final decisions are becoming harder to understand. Take the example of Deep Patient, an AI system that predicts disease in hospital patients. It appears to be a better predictor than physicians: it can anticipate psychiatric disorders like schizophrenia, illnesses that are very hard for physicians to predict. While this is good news, no one really knows how or why it works. Consequently, before letting AI decide inmates' parole or our medical predictions, we should make sure that we understand how these systems work and that they are trained to be unbiased.
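Full transparency into a deep model may be out of reach, but there are model-agnostic probes that at least reveal which inputs a black box relies on. Below is a minimal sketch of one such probe, permutation importance, run on a synthetic stand-in model; this is not how Deep Patient itself was analyzed.

```python
# Minimal, illustrative sketch of a model-agnostic probe for a
# black-box model: permutation importance (how much accuracy drops
# when one feature is shuffled). The data and model are synthetic
# stand-ins, not a real clinical system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # feature 2 is irrelevant

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for i, drop in enumerate(result.importances_mean):
    print(f"feature {i}: mean accuracy drop when shuffled = {drop:.3f}")
# Features the model truly relies on show large drops. Probes like
# this let an auditor see *what* a model uses even when *why* it
# works stays opaque.
```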
References
Knight, W. (2017). Google’s AI chief says forget Elon Musk’s killer robots, and worry about bias in AI systems instead. [online] MIT Technology Review. Available at: https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/ [Accessed 17 Oct. 2019].
Knight, W. (2017). There’s a big problem with AI: even its creators can’t explain how it works. [online] MIT Technology Review. Available at: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/ [Accessed 17 Oct. 2019].
Plomion, B. (2017). Council Post: Does Artificial Intelligence Discriminate? [online] Forbes.com. Available at: https://www.forbes.com/sites/forbescommunicationscouncil/2017/05/02/does-artificial-intelligence-discriminate/#1c6c70a030bc [Accessed 17 Oct. 2019].
NowThisOriginals. (2019). Why Developing Ethical, Unbiased AI Is Complicated. [online] Available at: https://nowthisoriginals.com/videos/future/why-developing-ethical-unbiased-ai-is-complicated [Accessed 17 Oct. 2019].
Hi Clarisse, thanks for your post. It is an interesting subject! A lot of articles have been written about bias in AI, and the thing that always surprises me is that it looks as if AI is doing something that people do not do. However, I think AI is very similar to people, because we have these biases too. And since AI is based on data, its bias is simply caused by the facts in that data. The fact that Amazon's AI application ended up hiring white men is probably just a reflection of the fact that most of its successful current employees are also white men. So there is nothing to 'blame' AI for. The difference between AI and people, however, is that we can realize that we are biased, while AI cannot. Therefore, I do think it is important to always evaluate the decisions of AI, and this is where the human factor is still important. This might in turn lead to decisions that take into account not just the past (as AI does), but also the future, which could for example contribute to more diverse workforces.