Are We Safer or More Exposed? Possible Impacts of AI Facial Recognition

9 October 2025


Artificial Intelligence (AI), a rapidly proliferating capability of computational systems, has been adopted in many fields, including policing. This raises a critical question: are we safer or more exposed? Since January 2024, live facial recognition has helped London police charge or cite more than 1,000 people (Satariano & Dearden, 2025). The system compares faces captured in real time against a database of wanted individuals in order to keep the environment safe for innocent people. Consider the recent tragic news from Amsterdam, where a young woman was killed while cycling home. If AI-powered CCTV could detect violence in real time or support predictive policing, and automatically alert the police, would the story have ended differently?

However, concerns about AI accuracy and privacy challenge this vision of an automatic security system. AI can inherit bias from historical crime data (NAACP, 2024), which may lead to wrongful arrests. Errors also occur more frequently when the technology identifies women and ethnic minorities (NAACP, 2024). These wrongful identifications amount to a serious form of discrimination and may make the policing system untrustworthy. Another concern is the erosion of privacy: under an AI surveillance system, everyone is monitored at every moment, and our daily lives, habits, and preferences are continuously observed and recorded.

Despite these challenges, I personally see a safety net that works 24/7 as an advantage. We should not ignore AI's potential to bring us a safer society by banning AI policing outright; instead, we should implement the technology wisely and responsibly. Human oversight can tackle the accuracy problem: AI collects the data and flags possible violence in real time, while humans verify the results and decide whether to take action. Regarding privacy, as long as the collected data is securely protected and its use is restricted under proper regulations, why abandon a useful technology for a safer society?

Reference List

NAACP. (2024, February 15). Artificial intelligence in predictive policing issue brief. https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief

Satariano, A., & Dearden, L. (2025, September 7). Has Britain gone too far with its digital controls? The New York Times. https://www.nytimes.com/2025/09/17/technology/britain-facial-recognition-digital-controls.html


2 thoughts on “Are We Safer or More Exposed? Possible Impacts of AI Facial Recognition”

  1. You raise a really good point about balancing safety and privacy! I agree that AI can make policing more efficient, but I wonder how realistic safe data protection really is once surveillance becomes this extensive. Even if regulations exist, enforcement is often weak or outdated compared to how quickly AI systems evolve… Do you think public transparency, such as publishing algorithmic performance reports, could increase trust while still keeping the benefits you describe? That way the public and experts could verify that the system works and is improving!

    I think this would be a good idea in theory, but I worry that too many people value privacy over safety. I agree that AI shouldn’t be banned outright; like any tool, it depends on the implementation. Maybe a good middle ground is to limit AI to specific, high-risk contexts, such as missing-person searches, rather than continuous 24/7 surveillance. This would reduce the privacy intrusion while still using AI where its benefits are clearest!

  2. I see the potential AI has in policing, and it will be an interesting development worth following, hopefully one that is managed thoughtfully. On the one hand, AI could play a significant role in preventing crimes and catching suspicious or dangerous people faster. The idea that AI could help spot crimes in real time sounds like something that would make communities safer, especially since police officers can’t be everywhere at once.
    However, the flip side is that people would be constantly monitored, which feels quite unsettling. Even if the data is extremely well protected, it still means that our faces and actions are constantly tracked and analyzed, which means the system can also be misused. And considering the bias issue you mentioned, with AI systems unfairly treating certain groups, it raises the question of whether the technology creates even more problems than it solves. The best way forward, in my opinion, is what you suggested: keeping AI as a tool under human control. Leave the detection to the AI, but keep the final decision in the hands of humans, since they can consider context better.

