Artificial Intelligence in Policing

13 October 2022


In 2021, PredPol (short for predictive policing), one of the leading firms driving the adoption of AI in policing in the United States, came under heavy scrutiny. Investigations found that its software disproportionately predicted crime in lower-income neighbourhoods whose inhabitants are predominantly working-class people of colour, and Black people in particular (Guariglia, 2022).

PredPol's aim is to help police distribute manpower across cities and neighbourhoods. By predicting where crime will occur, police organizations can identify these areas before any crime is believed to take place and station officers there, either to respond swiftly or in the hope that their presence deters crime altogether. How do PredPol's predictions work? The company uses a machine-learning algorithm trained on historical event datasets for each city, usually reaching back two to five years. The data collected contains no demographic or personal information about neighbourhoods or people; only three data points are recorded per event: crime type, crime location, and crime date/time (PredPol, 2022). PredPol claims this eliminates the possibility of bias sending police to discriminated-against neighbourhoods.

Although the categories of data collected do not show it, this input (i.e., the training data) actually reinforced a bias already present in policing. Police have long been known to unfairly police neighbourhoods inhabited by working-class Americans, and African Americans in particular. This bias reaches so deep into the system that police administrative records themselves lead to misunderstandings of the level of bias present in policing within the US (Peeples, 2020). Remember that PredPol extracts its training data from exactly these records. The result is a self-reinforcing feedback loop: historically harmful and biased policing patterns are continuously fed into the model, teaching it not to do the job better, but to do the same job at a faster rate. And because the system keeps learning from new records, the algorithm continued to unfairly send police to the same discriminated-against neighbourhoods and then stored the resulting data to re-affirm what it had learned.
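To make that loop concrete, here is a minimal sketch in Python. It is not PredPol's actual model: the neighbourhood labels, crime rates, detection factors, and patrol rule are all hypothetical, invented only to show how allocating patrols based on historical records can inflate those same records.

```python
import random

# Toy illustration of the self-reinforcing feedback loop described above.
# NOTE: this is NOT PredPol's algorithm. The neighbourhoods, crime rates,
# and detection factors are hypothetical, chosen only to make the
# mechanism visible.

TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}   # both areas have identical true rates
historical_records = {"A": 60, "B": 40}    # but "A" was historically over-policed


def predict_hotspot(records):
    """Toy 'prediction': patrol wherever past records show the most crime."""
    return max(records, key=records.get)


def patrol_and_record(records, rounds=1000):
    """Simulate repeated patrol allocation driven by the accumulating records."""
    for _ in range(rounds):
        hotspot = predict_hotspot(records)
        for area, rate in TRUE_CRIME_RATE.items():
            # Crime only enters the records where officers are present to see it:
            # the predicted hotspot gets full coverage, the other area far less.
            detection = 1.0 if area == hotspot else 0.2
            if random.random() < rate * detection:
                records[area] += 1
    return records


print(patrol_and_record(dict(historical_records)))
# Typical result: area "A" ends up with substantially more recorded crime than
# "B", even though the underlying crime rates were identical from the start.
```

In this toy setup the model is never shown any demographic data, yet the initial imbalance in the records is enough to keep directing patrols, and therefore new records, towards the same neighbourhood.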

The moral of the story is that, despite a good objective, data quantity cannot be valued over data quality. Moreover, when designing software for areas prone to bias, such as policing, it is essential to take extra care and consideration over what data is fed into the algorithm.

References

Guariglia, M. (2022). Police Use of Artificial Intelligence: 2021 in Review. Retrieved 11 October 2022, from https://www.eff.org/deeplinks/2021/12/police-use-artificial-intelligence-2021-review

PredPol. (2022). How PredPol works | Predictive policing. Retrieved 11 October 2022, from https://www.predpol.com/how-predictive-policing-works/

Jany, L. (2022). Researchers use AI to predict crime, biased policing in major U.S. cities like L.A. Retrieved 11 October 2022, from https://www.latimes.com/california/story/2022-07-04/researchers-use-ai-to-predict-crime-biased-policing

Peeples, L. (2020). What the data say about police brutality and racial bias — and which reforms might work. Retrieved 11 October 2022, from https://www.nature.com/articles/d41586-020-01846-z


2 thoughts on “Artificial Intelligence in Policing”

  1. A really important piece on the balance between ethics and the functionality of an AI system. In my opinion you could have focused a bit more on the actual bias in the data and whether there was any. A good discussion may arise about whether we want these systems in place where the risk of unethical discrimination is present. Overall an interesting discussion and one we will definitely hear more of in the future.

  2. Thank you for your post. The example you have used shows why this is such an important topic. Furthermore, it clearly shows that potential bias can emerge at any stage of data collection, and that in some industries/fields of work human actions continue to play a major role in making strategic decisions. It seems to me a challenging task in this case to create a benchmark that is not statistically biased at all. I did see that Knox, D., Lowe, W. & Mummolo, J. (2020) suggest a bias-correction procedure in their article. It will be interesting to see how this develops. I think the value of the algorithm is limited for now.
    Knox, D., Lowe, W. & Mummolo, J. Am. Polit. Sci. Rev. https://doi.org/10.1017/S0003055420000039 (2020).
