A few years ago, I watched Person of Interest – a TV series in which Artificial Intelligence (AI) is used to analyse data from cameras, computers, and other electronic devices, making it possible to predict and prevent crimes before they happen. At the time, I never thought something like that was possible, but that has changed. Even though the AI technologies available today are nowhere near as sophisticated as those in the show, experts affirm that there has been serious progress in that direction.
According to Dr. Simon See, director of the NVIDIA AI Technology Centre, “AI can predict the probability of crime in a location by detecting anomalies and faces”. This is exactly what China is aiming to do. Cloud Walk, a company located in Guangzhou Tianhe Software Park, combines its facial recognition software with big data analysis tools to track people’s locations and behaviour in order to assess the odds of them committing a crime. Suspicious behaviours, such as frequent visits to gun shops or transport hubs (a prime target for terrorists), are flagged, and a warning is forwarded to the local police, who can then intervene before the crime even happens.
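To make the idea concrete, here is a toy sketch of the kind of rule-based flagging described above. It is purely illustrative – the location names, threshold, and logic are my own invented assumptions, not anything from Cloud Walk's actual (and far more complex) system:

```python
from collections import Counter

# Hypothetical list of "sensitive" location types and a made-up
# cut-off for what counts as a "frequent" visitor.
SENSITIVE = {"gun shop", "transport hub"}
THRESHOLD = 3

def flag_visits(visits):
    """Return the sensitive locations visited at least THRESHOLD times."""
    counts = Counter(place for place in visits if place in SENSITIVE)
    return {place for place, n in counts.items() if n >= THRESHOLD}

history = ["gun shop", "cafe", "gun shop", "transport hub", "gun shop"]
print(flag_visits(history))  # {'gun shop'}
```

Even in this trivial form, you can see the ethical problem: whoever chooses the sensitive locations and the threshold decides who gets flagged.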
In addition, in Durham, England, the police are using HART – an AI system developed with a professor at the University of Cambridge – to help them determine whether a suspect should be released or kept in custody. HART uses the police’s database to forecast the risk of a suspect re-offending, placing them in a low, moderate, or high-risk category; the police can then decide on the appropriate course of action based on the ranking. Although the system is not yet ready for large-scale deployment, the tests conducted in Durham are quite encouraging, as only 2% of low-risk suspects went on to commit a serious offence. Similarly, in the US, courts and correction departments are using AI to help them pass judgement. Like HART, the system estimates the likelihood of a defendant committing another offence, or of failing to appear for their court date; based on the output, the court can then make decisions about bail, sentencing, and parole.
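The risk-banding idea can be sketched in a few lines. To be clear, the features, weights, and thresholds below are entirely hypothetical; the real HART is reported to be a machine-learning model trained on years of police records, not a handful of hand-written rules:

```python
# Toy illustration of low/moderate/high risk banding, in the spirit of
# the systems described above. All numbers here are invented.
def risk_band(prior_offences, age, months_since_last_offence):
    score = prior_offences * 2 - months_since_last_offence * 0.1
    if age < 25:
        score += 1  # hypothetical weighting for younger suspects
    if score >= 5:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"

print(risk_band(prior_offences=4, age=22, months_since_last_offence=2))   # high
print(risk_band(prior_offences=0, age=40, months_since_last_offence=24))  # low
```

Notice that even this toy version encodes its author's assumptions – for instance, that youth should raise the score – which is exactly the hidden-bias worry discussed below.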
After reading a few articles to write the present post, my first thought was that using AI to reduce criminal offences is an amazing idea: less crime, terrorist attacks prevented, less “detective work” for law enforcement – what more could we want? On further consideration, however, I believe that even if AI might be able to prevent some crimes in the ways mentioned previously, it also presents several issues.
First, whoever does the design and coding brings their own beliefs, biases, misunderstandings, and, most crucially, prejudices to the table. As long as this issue is not fixed, should we really entrust a person’s freedom to a machine that might contain hidden biases, rather than to a jury composed of random people from different backgrounds?
Second, it is important not to forget that law enforcement is not the only party making use of IT; criminals do too. So although AI might prove useful in reducing crime, it also poses new threats to security, and as long as we haven’t found ways to counter these, I wouldn’t trust my life to a self-driving car or the like.
And you, what do you think about using AI to reduce crime and pass judgement? Let me know in the comments.
References:
- Gibbs, M. (2017, February 25). Pre-crime, algorithms, artificial intelligence, and ethics. Retrieved from https://www.networkworld.com/article/3174331/big-data-business-intelligence/pre-crime-algorithms-artificial-intelligence-and-ethics.html
- Hamill, K. D. (2017, May 11). British cops test Minority Report-style system to stop crimes before they happen. Retrieved from https://www.thesun.co.uk/tech/3536544/british-cops-test-minority-report-style-system-to-stop-crimes-before-they-happen/
- Markou, C. (2017, May 16). Why using AI to sentence criminals is a dangerous idea. Retrieved from https://phys.org/news/2017-05-ai-sentence-criminals-dangerous-idea.html
- Yang, Y., Yang, Y., & Ju, S. F. (2017, July 23). China seeks glimpse of citizens’ future with crime-predicting AI. Retrieved from https://www.ft.com/content/5ec7093c-6e06-11e7-b9c7-15af748b60d0
Thank you very much for this blog post. It reminded me of the movie Minority Report, where crimes are also predicted before they happen, though not technically by an AI.
I am not sure about your first argument, though. Of course, the programmer might make some mistakes while programming, but the AI then “learns” on its own from the data provided to it. Based on that data, it offers advice on which inmates are more likely to commit another crime and which are not.
However, I share your scepticism concerning the use of AI in this area. By implementing this technology, a government basically hands the power to decide over a person’s future to a program whose results it can neither explain nor understand, as they are based on vast numbers of small data points. In my opinion, governments should rather focus on eliminating the causes of crime by providing goals and opportunities for inmates after release, when they need to be reconnected with society.