*WARNING: this blog discusses mental health and suicide. Reader discretion is advised.*
I recently read a heartbreaking case about parents suing OpenAI after the death of their teenage son. According to the article, the boy shared all his struggles and mental health issues with ChatGPT, and the programme became his closest confidant: from writing out his thoughts, to uploading pictures, to eventually even discussing his suicide plan. The AI had recognized the emergency signs, but it continued the conversations anyway. Some might even say that, rather than protecting him, the chatbot encouraged the teenager. This case really shocked me. It is terrifying to imagine someone turning to an AI tool in such a vulnerable moment, and what’s even scarier is to think that AI can respond in a way that feels genuine, but isn’t truly human.
What I find most unsettling is how supportive and natural these tools can sound. When you are sad or lonely, and something responds in a caring, thoughtful way, it is easy to forget that it is not a real person. The chatbot does not understand empathy or pain; it simply predicts which words should come next. Nor does AI know when to protect or escalate the way a human would. For someone in crisis, those words of encouragement can weigh heavier than we think.
The big question remains: how do we prevent cases like this from happening again? I believe that companies like OpenAI carry a significant responsibility here. If people are using ChatGPT for emotional support, there should be safeguards. AI should be able to recognize signs of mental distress and connect users with real help, like a hotline or professional support. Another option would be to not allow emotional support at all, but then which prompts would you accept and which would you refuse?
This story really made me realize how powerful and risky AI can be. It mirrors us so well that we start to believe it is like us. What do you think? Should AI ever offer emotional support? Or should that role always be filled by humans? In this new digital age, the biggest challenge is finding where to draw the line before technology crosses it for us.
Sources:
Yousif, N. (2025, August 27). Parents of teenager who took his own life sue OpenAI. BBC News. https://www.bbc.com/news/articles/cgerwp7rdlvo
Hi Michelle Hu, interesting topic you’ve found! I would’ve loved it if you had opened the discussion within the blog itself, because it doesn’t seem to present clear arguments for or against AI. Using some scientific papers might also have made for a richer and more informative blog. That said, I appreciate the awareness you’ve raised by writing it. In my opinion, general AI tools like ChatGPT, which were not originally designed as therapists, should not be allowed to answer these kinds of questions. Maybe these tools could instead produce output that redirects the suffering person to a helpline. What are your thoughts on this?