Mental health struggles due to AI – a view from ChatGPT

22 October 2023


Beyond the information overload we already face from the internet, social media and our phones in general, I was wondering what other impacts AI might have on our mental health. In my other blog post I already touched on these topics, but this time I asked ChatGPT for more specific future issues related to mental health. I wanted to see whether generative AI could perhaps predict some of the problems that will arise because of itself.

It is a reflection exercise with an algorithm that is trained to predict words.

But can it come up with its own labels for mental health conditions that have not yet been discovered?

Here are some of the answers and my thoughts to those:

Mental health conditions are typically the result of complex interactions between biological, psychological, social, and environmental factors. However, it is possible for AI-related factors to exacerbate or contribute to certain mental health issues. Here are some hypothetical conditions that could be associated with AI and how they might be named:

AI-Induced Technostress (AIT): This condition could describe the stress and anxiety individuals experience due to constant exposure to and reliance on AI-driven technologies. Symptoms may include information overload and fear of technological advancements.

–> Imagining a world with AI-integrated IoT devices at home, or across a whole city, might indeed cause a strange sensation and real stress. Think of voice-enabled devices such as TVs, microwaves and phones. Are you truly alone at home? Or is someone listening?

Algorithmic Anxiety (AA): Algorithmic anxiety might be used to describe the anxiety and worry individuals feel about the potential biases, discrimination, and unpredictability of AI algorithms. This condition could manifest as a heightened sense of distrust and paranoia related to AI systems.

–> Here I had to think about AI-driven employee selection processes, such as the one at Amazon, where the algorithm was biased towards selecting men. Fears of bias in algorithms are therefore justified. Misinformation in AI-generated text is also a common problem, and struggling to tell what is real and what is not might be part of the anxiety it could cause.

Digital Dependence Disorder (DDD): DDD could be used to refer to the addiction-like behaviors people develop as they become increasingly dependent on AI-driven devices and applications. Symptoms may include withdrawal symptoms when disconnected from technology and neglect of real-world relationships.

–> Perhaps AI will indeed make us stick to our phones even more, increasing our dependency on the online world and decreasing our touch with ‘the real world’. It is a scary scenario that has been explored in several movies, such as “Her”, in which a writer falls in love with an AI system.

The idea that an AI could substitute real human connections is a topic in itself. I am not sure what the net value of something like this would be, considering that loneliness is on the rise, especially with age, and that friend groups tend to shrink over time.

Autonomous Anxieties (AA): Autonomous anxieties could describe the anxiety that arises when individuals fear the loss of control or autonomy to AI systems. This condition might involve a heightened sense of helplessness and vulnerability in an increasingly automated world.

–> Here I had to think of future devices like house locks activated by voice or face recognition. What if someone loses their voice or suffers a facial injury? Will the AI still grant them access to their own house? Perhaps not. Self-driving cars bring this reality even closer, as they are already being tested. What if we do not like a decision made by AI? What if AI somehow causes an accident? We could become so dependent on AI that we cannot easily change its outcomes.

The examples above are purely fictional! But the descriptions seem plausible in themselves.

It’s important to note that the naming of such conditions is purely hypothetical, and there is no established medical consensus or diagnostic criteria for these specific AI-related mental health issues.

Mental health professionals and researchers would need to conduct thorough studies and assessments to determine whether such conditions exist and warrant formal recognition and diagnosis.

I found it interesting to see what ChatGPT could come up with. I hope, for all of us, that the issues mentioned above do not affect society at large.

May we all stay educated and on top of AI developments 😉
