AI promises many benefits, so we are currently testing the fields in which we can apply it. But in some of those fields we may need to slow the implementation down.
One of the fields in which we are testing AI is the courtroom. We already had an AI program that, based on vast data sets, would give a verdict. This verdict was compared to that of the actual judge, and in 79% of cases the AI reached the same verdict (Johnston, 2016). In some American courts, AI is already used to help decide whether, and for how long, someone should be jailed (Judges Now Using Artificial Intelligence to Rule on Prisoners, 2018). Again, the computer analyses thousands of previous cases and bases its verdict on patterns it has learned from them.
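For illustration, here is a minimal sketch of how such a system could learn a verdict from previous cases. The feature names, data and model choice are invented for this example; they are an assumption, not the actual systems described above:

```python
# Hypothetical sketch: predicting a verdict purely from past cases.
# Features and data are invented for illustration; real systems such as
# the one described by Johnston (2016) are far more complex.
from sklearn.linear_model import LogisticRegression

# Each past case: [severity of offence (0-10), prior convictions, age]
past_cases = [
    [8, 3, 25],
    [2, 0, 40],
    [6, 1, 30],
    [1, 0, 55],
    [9, 4, 22],
    [3, 0, 35],
]
# 1 = custodial sentence, 0 = no custodial sentence
past_verdicts = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(past_cases, past_verdicts)

# A new case is judged only by its resemblance to previous ones.
new_case = [[7, 2, 28]]
print("Predicted verdict:", model.predict(new_case)[0])
print("Probability score:", model.predict_proba(new_case)[0][1])
```

The point of the sketch is that the model has no notion of remorse or context; it only reproduces patterns in the numbers it was trained on.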
The problem is that the AI bases its verdict solely on the analysis of previous cases. But where is the human aspect if we let a machine ground its verdict in information from previous cases alone?
The core purpose of a court system is to prevent someone from making the same mistake again. Indeed, if the defendant shows remorse, a judge tends to reduce the sentence, because the judge can interpret the remorse as a sign that the defendant will not repeat the offence (van Doorn, 2013). Even though AI is making progress in interpreting language, emotion and images, these are still the areas a computer struggles with most (Brynjolfsson & McAfee, 2017). And these are exactly what a judge in a courtroom relies on to assess whether the defendant shows signs of remorse.
An even bigger problem with AI in court is that it assigns the defendant a probability score based on the analysis of previous cases (Judges Now Using Artificial Intelligence to Rule on Prisoners, 2018). This kind of measurement has been tried before, when Bayesian statistics was used in court. I deliberately write was used, because the English Court of Appeal, for example, banned probability measures such as Bayesian statistics and the Sherlock Holmes doctrine. The problem with this statistical reasoning is that the explanation with the highest probability is adopted simply because all other explanations have a lower probability (Spiegelhalter, 2013). By this logic, an inherently unlikely explanation can become the leading one just because the statistics say so. Having stopped using these probability measures in court, we are now introducing AI, which does the same thing in a more sophisticated manner.
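A toy calculation makes this pitfall concrete. The probabilities below are invented for illustration; the point is that the "most probable" explanation can itself be more likely false than true:

```python
# Toy illustration (invented numbers) of the pitfall described above:
# adopting the single most probable explanation, even when that
# explanation is more likely to be wrong than right.
explanations = {
    "defendant acted alone": 0.35,
    "accomplice committed the act": 0.33,
    "unrelated third party": 0.32,
}

best = max(explanations, key=explanations.get)
print(f"Leading explanation: {best} (p = {explanations[best]:.2f})")
print(f"Chance the leading explanation is wrong: {1 - explanations[best]:.2f}")
# The 'winning' explanation has only a 35% chance of being true, yet
# under highest-probability reasoning it would be adopted as the answer.
```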
Since AI reaches the same verdict as a judge in most cases, it will have a useful role in court in the future. But until AI can evaluate a defendant as well as a judge, and until we have found a way around the probability problem, we must leave the final verdict to a human.
Brynjolfsson, E., & McAfee, A. (2017). The Business of Artificial Intelligence. Boston: Harvard Business School Publishing Corporation.
Johnston, C. (2016, October 24). Artificial intelligence ‘judge’ developed by UCL computer scientists. Retrieved from The Guardian: https://www.theguardian.com/technology/2016/oct/24/artificial-intelligence-judge-university-college-london-computer-scientists
Judges Now Using Artificial Intelligence to Rule on Prisoners. (2018, February 7). Retrieved from Learning English: https://learningenglish.voanews.com/a/ai-used-by-judges-to-rule-on-prisoners/4236134.html
Spiegelhalter, D. (2013, February 25). Court of Appeal bans Bayesian probability (and Sherlock Holmes). Retrieved from Understanding Uncertainty: https://understandinguncertainty.org/court-appeal-bans-bayesian-probability-and-sherlock-holmes
van Doorn, B. (2013, August 15). Spijt betuigen in de rechtbank: ‘Als dader kan je het beter maar wel doen’ [Expressing remorse in court: ‘As an offender, you had better do it’]. Retrieved from Omroep Brabant: https://www.omroepbrabant.nl/nieuws/163061/Spijt-betuigen-in-de-rechtbank-Als-dader-kan-je-het-beter-maar-wel-doen