When Chatbots Listen: Bridging the Gap or Widening It?

18 September 2025


I recently read an article in The Guardian about the rising use of AI chatbots for mental health support and the concerns therapists are voicing (Hall, 2025). It made me think about my own circle. Some of my friends genuinely enjoy using AI tools: for them, opening an app feels less intimidating than calling a clinic, and it is often much faster. With long waiting lists and high session costs, AI feels like an accessible alternative.

Artificial intelligence is rapidly becoming a key player in mental health care. It promises to make support more accessible and personalised. It can pick up subtle signals and flag early signs of mental illness before symptoms become severe, and it can even create tailored treatment plans and adjust therapy dynamically as patients progress, reducing the frustrating trial-and-error process.

These innovations bring clear benefits. They can shorten waiting times, make care more scalable, and offer a low-threshold way for people to seek help without immediately committing to therapy sessions. For many, an AI chatbot can be a first step: a safe, stigma-free place to express feelings before speaking to a professional.

But there are real reasons to be cautious. AI can analyse data and spot patterns with impressive precision, but it cannot feel empathy or fully understand context. A trained therapist notices subtle emotional shifts and offers the human connection that a chatbot simply cannot replicate. Without this, care risks becoming too standardised and detached.

Another concern is privacy. Sharing intimate thoughts with an app means trusting that your data is stored securely and used ethically. Unfortunately, this is not always the case: research has shown that some mental health apps share data with third parties or use it for targeted advertising, often without users realising it. This makes it even more important to demand transparency and strict data protection if AI is to play a safe role in mental health care.

I believe the future of AI in mental health is hybrid. AI can function as a bridge, providing rapid, low-threshold support, while therapists do the more in-depth emotional work. Together, they can make mental health treatment more accessible than ever.

Do you think that AI chatbots should play a bigger role in mental health care, or do we risk replacing too much of the human connection?

References

Hall, R. (2025, August 30). ‘Sliding into an abyss’: Experts warn over rising use of AI for mental health support. The Guardian. https://www.theguardian.com/

Murdoch, B. (2021). Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Medical Ethics, 22(1), 122. https://doi.org/10.1186/s12910-021-00687-3

Olawade, D. B., Wada, O. Z., Odetayo, A., David-Olawade, A. C., Asaolu, F., & Eberhardt, J. (2024). Enhancing mental health with artificial intelligence: Current trends and future prospects. Journal of Medicine, Surgery, and Public Health, 3, 100099. https://doi.org/10.1016/j.glmedi.2024.100099

Thakkar, A., Gupta, A., & De Sousa, A. (2024). Artificial intelligence in positive mental health: A narrative review. Frontiers in Digital Health, 6, 1280235. https://doi.org/10.3389/fdgth.2024.1280235
