The unjust AI

19 October 2018


AI promises many benefits, so we are currently testing the fields in which we can apply it. But sometimes we may need to slow the implementation down a bit.

One of the fields in which AI is being tested is the courtroom. There is already an AI program that, trained on vast data sets, produces a verdict for a given case. When its verdicts were compared with those of actual judges, the AI reached the same verdict in 79% of cases (Johnston, 2016). In some American courts, AI is already used to help decide whether, and for how long, someone should be jailed (Judges Now Using Artificial Intelligence to Rule on Prisoners, 2018). Again, the computer analyses thousands of previous cases and bases its verdict on patterns it has learned from them.
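
At bottom, what such a system does is supervised classification: encode past cases as numbers, use the known outcomes as labels, and let a model learn the mapping. Below is a minimal sketch of that idea; the feature names and data are invented for illustration and do not reflect the inputs of any real court system.

```python
# Illustrative sketch only: a verdict "predictor" trained on toy data.
# Feature names and numbers are invented; real systems use far richer inputs.
from sklearn.linear_model import LogisticRegression

# Each past case: [prior_convictions, age_at_offence, severity_score]
past_cases = [
    [0, 34, 2.0],
    [3, 22, 7.5],
    [1, 45, 4.0],
    [5, 29, 8.0],
    [0, 51, 1.5],
    [2, 19, 6.0],
]
# Known outcomes of those cases: 1 = custodial sentence, 0 = no custody
verdicts = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(past_cases, verdicts)

# A new defendant is scored purely by resemblance to previous cases.
new_case = [[1, 30, 5.0]]
print(model.predict(new_case))        # the hard "verdict": 0 or 1
print(model.predict_proba(new_case))  # the probability score behind it
```

Note that nothing in this loop ever looks at the defendant in front of the court; the score is entirely a function of how earlier cases were decided.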

The problem is that the AI bases its verdict solely on an analysis of previous cases. Where is the human aspect if we let a machine decide purely on information from earlier cases?

A core purpose of the court system is to prevent someone from making the same mistake again. In fact, if the defendant shows remorse, a judge tends to reduce the sentence, because the judge can interpret the remorse as a sign that the defendant will not reoffend (van Doorn, 2013). Even though AI is improving at interpreting language, emotion and images, these remain exactly the areas computers struggle with most (Brynjolfsson & McAfee, 2017). And they are precisely what a judge in a courtroom relies on to assess whether a defendant shows genuine remorse.

An even bigger problem with AI in court is that it produces a probability score for a defendant by analysing previous cases (Judges Now Using Artificial Intelligence to Rule on Prisoners, 2018). This kind of quantification has been tried before, when Bayesian statistics was used in court. I write was deliberately: the English Court of Appeal, for example, banned probability reasoning such as Bayesian statistics and the "Sherlock Holmes doctrine". The problem with this kind of statistical reasoning is that the explanation with the highest probability is adopted simply because the alternatives are less probable (Spiegelhalter, 2013). By this logic, an explanation that is itself unlikely can become the leading one, merely because the statistics rank it first. Having stopped using such probability measures in court, we are now introducing AI, which does the same thing in a more sophisticated way.
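
A tiny worked example, with made-up numbers, shows why "take the most probable explanation" can mislead: after a Bayesian update, the leading hypothesis may win only because its rivals are even weaker, while itself remaining far from certain.

```python
# Toy Bayesian update with invented numbers, showing that the "most
# probable" explanation can still be quite improbable in absolute terms.

# Three competing explanations for one piece of evidence.
priors      = {"A": 0.50, "B": 0.30, "C": 0.20}   # P(hypothesis)
likelihoods = {"A": 0.20, "B": 0.25, "C": 0.30}   # P(evidence | hypothesis)

# Bayes' rule: posterior is proportional to prior * likelihood, then normalise.
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

print(posterior)
# {'A': 0.426, 'B': 0.319, 'C': 0.255} (approximately)
# "A" has the highest posterior, yet the odds are still ~57% that A is wrong.
```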

Since AI reaches the same verdict as a judge in most cases, it will be of good use in court in the future. But until AI can evaluate a defendant as well as a judge can, and until we find a way around the probability problem, we must leave the final verdict to a human.

Brynjolfsson, E., & McAfee, A. (2017). The Business of Artificial Intelligence. Boston: Harvard Business School Publishing Corporation.
Johnston, C. (2016, 10 24). Artificial intelligence ‘judge’ developed by UCL computer scientists. Retrieved from The Guardian: https://www.theguardian.com/technology/2016/oct/24/artificial-intelligence-judge-university-college-london-computer-scientists
Judges Now Using Artificial Intelligence to Rule on Prisoners. (2018, 02 07). Retrieved from Learning English: https://learningenglish.voanews.com/a/ai-used-by-judges-to-rule-on-prisoners/4236134.html
Spiegelhalter, D. (2013, 02 25). Court of Appeal bans Bayesian probability (and Sherlock Holmes). Retrieved from Understanding Uncertainty: https://understandinguncertainty.org/court-appeal-bans-bayesian-probability-and-sherlock-holmes
van Doorn, B. (2013, 08 15). Spijt betuigen in de rechtbank: ‘Als dader kan je het beter maar wel doen’ [Expressing remorse in court: ‘As an offender, you had better do it’]. Retrieved from Omroep Brabant: https://www.omroepbrabant.nl/nieuws/163061/Spijt-betuigen-in-de-rechtbank-Als-dader-kan-je-het-beter-maar-wel-doen


AI: what we are programmed to fear

19 October 2018


To know what scares us, we must first define AI. AI, short for artificial intelligence, is a very broad term that is not strictly defined. The simplest description is "intelligence demonstrated by machines" (Wikipedia, 2018). Within this broad definition there are three categories: narrow AI, general AI and super AI. Narrow AI is built to perform a single task, general AI understands and reasons about its environment as a human would, and super AI is smarter than humans at basically everything (Dickson, 2017). Currently, most development happens in machine learning, a subfield of narrow AI. Machine learning refers to the process by which a machine improves its own performance on a specific task, without humans explaining exactly how to accomplish that task (Brynjolfsson & McAfee, 2017). With machine learning we know the outcome, but we do not understand how the machine arrived at the answer; the learning process is largely opaque to us.
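
To make the "black box" point concrete, here is a hedged sketch using toy data: the trained model produces answers, but the only trace of its "learning" we can inspect is raw weight matrices, which explain nothing in human terms.

```python
# Sketch of the opacity problem: the trained model works, but what it
# "learned" is just arrays of numbers. Toy data; illustrative only.
from sklearn.neural_network import MLPClassifier

# Toy task: label a point 1 if its coordinates sum to more than 1.
X = [[0.1, 0.2], [0.9, 0.8], [0.4, 0.4], [0.7, 0.9], [0.2, 0.1], [0.6, 0.6]]
y = [0, 1, 0, 1, 0, 1]

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict([[0.8, 0.7]]))  # the outcome we can observe
for layer in model.coefs_:          # the "explanation": raw weight matrices
    print(layer.shape, layer.flatten()[:4])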

We humans fear the unknown. At its most basic, fear of the unknown can be described as "the perceived absence of information at any level of consciousness" (Carleton, 2016b). It is a fundamental fear, meaning that it is an emotion that is continuously and normally distributed in the population, evolutionarily supported, and irreducible (Carleton, 2016a).

And there we have our AI: solving tasks in ways we no longer understand how it learned to solve them, black boxes trained on data sets too big for us to grasp. As long as it performs the tasks we have given it, we can handle it. But the moment computers behave in ways we did not predict, that is when it gets scary. This is best illustrated by two AI programs at Facebook that were set to negotiate with each other over a trade. Besides negotiating in English, the chatbots appeared to have developed an underlying language of their own to communicate with each other (Griffin, 2017).

In the Facebook example, the machines were only negotiating over a trade, yet this unexpected behaviour already sparks our imagination. Once we reach general AI, a program will be able to apply what it has learned to other situations. Now consider that we already have computers which, working together, beat the best human players at war games (Vincent, 2018). What might happen if those computers start to behave in ways we cannot predict?

Brynjolfsson, E., & McAfee, A. (2017). The Business of Artificial Intelligence. Boston: Harvard Business School Publishing Corporation.
Carleton, R. N. (2016a). Fear of the unknown: One fear to rule them all? Regina: University of Regina.
Carleton, R. N. (2016b). Into the unknown: A review and synthesis of contemporary models involving uncertainty. Regina: University of Regina.
Dickson, B. (2017, 05 12). What is Narrow, General and Super Artificial Intelligence. Retrieved from Techtalks: https://bdtechtalks.com/2017/05/12/what-is-narrow-general-and-super-artificial-intelligence/
Griffin, A. (2017, 07 31). Facebook’s artificial intelligence robots shut down after they start talking to each other in their own language. Retrieved from Independent: https://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html
Vincent, J. (2018, 06 25). AI bots trained for 180 years a day to beat humans at Dota 2. Retrieved from The Verge: https://www.theverge.com/2018/6/25/17492918/openai-dota-2-bot-ai-five-5v5-matches
Wikipedia. (2018, 10 12). Artificial intelligence. Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Artificial_intelligence
