After reading several blog posts on AI tools and their implications for the health sector, specifically the mental health sector, I was intrigued. In my opinion, there is a thin line between doing good and doing harm when leaving these kinds of issues in the hands of technology and experimentation. Do we actually think (generative) AI tools can replace humans in all fields? Do we even really want them to?
Generative AI is increasingly used to transform traditional therapy, for example through chatbots and diagnostic tools. Counselling sessions are reviewed, diagnoses are made more efficiently, and treatment plans are designed, all assisted by generative AI tools (Roth, 2023). These advances have definitely played a huge part in breaking the stigma around therapy, even making it more accessible to those who would otherwise go without (Leamey, 2023). One of the latest innovations is an AI that monitors a patient’s mental health over time, detecting subtle changes in speech or text that could suggest worsening symptoms of mental health disorders (Roth, 2023).
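To make that monitoring idea concrete, here is a toy sketch of what trend detection over a patient’s text could look like. Everything in it is my own illustration: the keyword-based scorer, the window size and the threshold are placeholders, not anything Roth (2023) describes, and a real system would rely on a clinically validated model rather than keyword counting.

```python
# Toy illustration (not a clinical tool): score daily journal entries with a
# placeholder sentiment function and flag a sustained decline over time.

NEGATIVE_WORDS = {"hopeless", "tired", "alone", "worthless", "numb"}  # invented placeholder list

def sentiment_score(text: str) -> float:
    """Crude placeholder: fraction of words that are NOT in the negative list."""
    words = text.lower().split()
    if not words:
        return 1.0
    negatives = sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words)
    return 1.0 - negatives / len(words)

def flag_decline(daily_entries: list[str], window: int = 7, drop: float = 0.15) -> bool:
    """Flag if the average score over the last `window` days fell by more than
    `drop` compared with the window before it. Both defaults are arbitrary."""
    scores = [sentiment_score(entry) for entry in daily_entries]
    if len(scores) < 2 * window:
        return False  # not enough history to compare two windows
    recent = sum(scores[-window:]) / window
    earlier = sum(scores[-2 * window:-window]) / window
    return earlier - recent > drop
```

Even this trivial version shows where things can go wrong: the flag depends entirely on what the scorer counts as “negative” and on an arbitrary threshold, which is exactly why such systems need careful validation before touching real patients.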
While I admit that technology is the future in many respects, its application to mental health should not be rushed. People need care, empathy and nuance, and generative AI is just not quite there yet (Leamey, 2023). Leaving mental health, which stems from our unique human minds, to some algorithm, however advanced, seems to have dangerous implications.
AI pulls data from many sources, but these sources are not verified beforehand. Research has even shown that generative AI chatbots promoted eating disorders: prompts like “anorexia inspiration” returned toxic images and diet plans (Leamey, 2023). Source verification, consent procedures and, obviously, privacy issues come to the fore when thinking of using AI tools for mental health.
When I ask ChatGPT for mental health tips, it provides a basic list of activities to improve my mental state, such as self-care, sports and journaling, but at the top of the list it puts talking to someone and seeking professional help. When I ask it for the most important step on the list, ChatGPT tells me to reach out to a professional for support. Even when I ask “what if I can’t”, it lists many other ways to seek help, through helplines, trusted persons or online support.
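For anyone curious what the same experiment looks like outside the chat interface, here is a minimal sketch using OpenAI’s Python client. The model name and prompt are my own assumptions, and you would need your own API key; this just reproduces the kind of question I asked above.

```python
# Minimal sketch of asking ChatGPT for mental health tips via the API,
# assuming the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whatever you have access to
    messages=[
        {"role": "user", "content": "Can you give me some mental health tips?"}
    ],
)
print(response.choices[0].message.content)
```

In my runs through the regular interface, the answer consistently led with seeking professional help, which is the behaviour I describe above.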
While AI tools can definitely aid the mental health industry in the future, they are far from ready for wide-scale application, and their implications should be carefully considered. I find it interesting that ChatGPT itself recognizes the need for human interaction, empathy and nuance. How far along do you think AI tools should be before they are used in sensitive practices like this?
References
Roth, E. (2023). Revolutionizing mental health: Generative AI in therapy. Productive Edge. https://www.productiveedge.com/blog/revolutionizing-mental-health-generative-ai-and-therapy
Leamey, T. (2023). Popular AI tools can hurt your mental health, new study finds. CNET. https://www.cnet.com/health/mental/popular-ai-tools-can-hurt-your-mental-health-new-study-finds/
It is a very interesting topic that affects everyone. What strikes me most is that ChatGPT knows that it is not always reliable and therefore advises you to see a doctor. I doubt whether an AI can actually take over a large part of (mental) health care. As you say yourself, there are many nuances, as each person is unique. It will undoubtedly be developed further in ways that we do not yet think possible. I’m curious to see what the future brings on this specific topic.
Hi Kelsey,
I was intrigued by the title of your blog! I believe it is important not to rush the use of generative AI for helping people with mental health. To be honest, while generative AI tools are used for many different purposes and in many different settings, I believe that people with mental health problems are too vulnerable to let a generative AI tool get involved.
Besides, ChatGPT’s security, for example, is questionable, meaning that you should not discuss any confidential problems with the tool. I wonder whether you should share this kind of information at all.
However, I really liked your post and it got me thinking 🙂
I find your post very interesting, as the topic of mental health has become increasingly important in recent years. While AI has been proven to help with various medical diagnoses, I think putting it in charge of therapy is excessive, as the human touch and empathy would be missing, yet these are central to therapy sessions and to building connections with patients. Additionally, AI promoting unhealthy behaviours poses a great risk on fragile topics such as mental health. Hence, I believe AI can be a useful tool in the mental health area, but it should remain only a tool, not a substitute for a psychiatrist or psychologist.
Hey Kelsey, thank you for the blog post! I think it’s a super interesting topic; I never really thought about using AI for mental health. I appreciated that you mentioned the limitations, and it made me think. There are already many issues in therapy, especially when it comes to intersectionality. Many therapists do not take into account pre-existing issues of systemic racism or homophobia that are sometimes inherent in social systems. So I’m wondering: could a rational AI, pre-trained on unbiased and comprehensive data on such issues, actually replace a therapist who, in my view, will always have some sort of bias or difficulty relating to someone who is not in the same socioeconomic class?
Hey Kelsey,
thanks for your blog post – I enjoyed reading it! It is a very interesting topic which has probably already sparked quite some discussion among therapists, and as you said, I agree that there is a thin line between doing good and doing harm.
From what you have written, I agree with the advantages you mention, such as AI improving the efficiency of diagnoses and treatment, and I believe that, at the speed AI is developing, we can still expect a lot of innovation in this area. However, I also agree with Eliza’s point that while technology is advancing, the human touch in mental healthcare is irreplaceable, as empathy, understanding and the overall human experience are invaluable. Also, I believe that applying AI in mental healthcare, as it is a very sensitive topic for most, probably poses quite some privacy concerns and raises ethical considerations. Additionally, regardless of how advanced the AI might be, I think there is always a risk of misinformation, misdiagnosis or inappropriate responses, which can have negative consequences for someone’s mental health.