Mental health struggles due to AI – a view from ChatGPT

22 October 2023


Next to the information overload we face due to the internet, social media and our phones in general, I was wondering what other impacts AI might have on our mental health. In my other blog post I already touched on these topics, but this time I asked ChatGPT for more precise future issues related to mental health. I wanted to see whether generative AI can perhaps predict some of the issues that will arise because of itself.

It is a reflection exercise with an algorithm that is trained to predict words.

But can it come up with its own labels for mental health conditions that have not yet been discovered?

Here are some of the answers and my thoughts on them:

Mental health conditions are typically the result of complex interactions between biological, psychological, social, and environmental factors. However, it is possible for AI-related factors to exacerbate or contribute to certain mental health issues. Here are some hypothetical conditions that could be associated with AI and how they might be named:

AI-Induced Technostress (AIT): This condition could describe the stress and anxiety individuals experience due to constant exposure to and reliance on AI-driven technologies. Symptoms may include information overload and fear of technological advancements.

–> Imagining a world where IoT devices at home, or across a whole city, have AI integrated might indeed cause a strange sensation and stress. Think here of speaking devices such as TVs, microwaves and phones. Are you truly alone at home? Or is someone listening?

Algorithmic Anxiety (AA): Algorithmic anxiety might be used to describe the anxiety and worry individuals feel about the potential biases, discrimination, and unpredictability of AI algorithms. This condition could manifest as a heightened sense of distrust and paranoia related to AI systems.

–> Here I had to think about employee selection processes, such as at Amazon, where the AI was biased towards selecting men. Such fears of bias in algorithms are therefore justified. Misinformation in AI-generated texts is also a common feature. Distinguishing what is real and what is not might be part of the anxiety it causes.

Digital Dependence Disorder (DDD): DDD could be used to refer to the addiction-like behaviors people develop as they become increasingly dependent on AI-driven devices and applications. Symptoms may include withdrawal symptoms when disconnected from technology and neglect of real-world relationships.

–> Perhaps AI will indeed make us stick to our phones even more, increasing our dependency on the online world and weakening our touch with ‘the real world’. A scary reality that has been explored in several movies such as “Her”, where a writer falls in love with an AI system.

The idea that an AI could substitute for real human connections is a topic in itself. I am not sure what the net value of something like this would be, considering on the one side that loneliness is on the rise, especially with age, and on the other side shrinking friend groups.

Autonomous Anxieties (AA): Autonomous anxieties could describe the anxiety that arises when individuals fear the loss of control or autonomy to AI systems. This condition might involve a heightened sense of helplessness and vulnerability in an increasingly automated world.

–> Here I had to think of future devices like house locks that are activated by voice or face recognition. What if one loses their voice or has a face injury? Will the AI grant us access to our houses? Perhaps not. Self-driving cars come even closer to this reality, as they are already being tested. What if we do not like a decision made by AI? What if AI somehow caused an accident? We could become so dependent on AI that we cannot easily change some of its outcomes.

The examples above are just fictional! But the descriptions seem plausible in themselves.

It’s important to note that the naming of such conditions is purely hypothetical, and there is no established medical consensus or diagnostic criteria for these specific AI-related mental health issues.

Mental health professionals and researchers would need to conduct thorough studies and assessments to determine whether such conditions exist and warrant formal recognition and diagnosis.

I found it interesting to see what ChatGPT could come up with. I hope, for all of us, that the issues mentioned above do not affect society at large.

May we all stay educated and on top of AI developments 😉


AI and mental health struggles

2 October 2023


I once watched a video about someone asking an AI chatbot about mental (health) struggles.
It was quite astonishing to me what the bot came up with itself.

Our mental capability is what distinguishes us from animals. My art teacher once said that because we use our brains so much and in so many different ways (e.g. creativity, logical reasoning, languages, etc.), we might here and there have people whose brains work in very extreme ways.

“It is the price we pay for brilliance” – an interesting perspective of my art teacher. She was very into psychology as well 😉

Therefore, I asked ChatGPT a similar question:

What are AI-caused mental health struggles in humans?

ChatGPT and Bard came up with various mental health struggles:


1. **Job Displacement Anxiety**: As AI and automation continue to replace certain jobs, individuals who lose their jobs may experience anxiety, depression, and stress related to financial insecurity and the uncertainty of finding new employment.

This is something I indeed hear here and there, also from students. Due to the economic downturn, it seems like competition in the job market is becoming fiercer. Adding an AI that can cut costs significantly and take over the easier, repetitive tasks means one has to be more skilled or sophisticated. Therefore, it is something my peers and I ponder when thinking about the future.


2. **Social Isolation**: Increased reliance on AI-driven technologies for communication and entertainment, such as social media and virtual reality, can contribute to social isolation and loneliness. These feelings of isolation can lead to mental health issues like depression and anxiety.

The issue of isolation came up during times like the Corona pandemic. People spending a lot of time on Netflix, Instagram, TikTok etc. lost real connection with the people around them. Of course, communicating with people is enabled through applications, but on the other hand those connections are weaker and not as deep (or we need to learn how to make meaningful connections using these applications). When it comes to generative AI, these bots could become, for some people, the comfort they do not find elsewhere, as AI will tell them what they want to hear, distancing them from real people and letting them flee into the digital space.


3. **Information Overload**: AI algorithms curate and deliver vast amounts of information to users, which can lead to information overload. Trying to keep up with this constant flow of data can cause stress, impacting mental well-being.

With so many texts, images and videos nowadays, it has become increasingly difficult to distinguish what is real and what is fake. In addition, more and more people have access to the internet and post their ideas and opinions, sometimes spreading them with AI-generated texts that save time and create mass posts.
Information flows from everywhere and can cause stress.



4. **Depersonalization**: Interactions with AI-driven customer service chatbots and automated systems can lead to feelings of depersonalization and frustration, especially when individuals prefer human interactions. This can negatively impact emotional well-being.

Probably a familiar issue: one cannot solve a problem with a chatbot and tries to reach a real human on the other side. The chatbot just repeats the same questions, and if the option to talk to a human is not given, one simply gives up and tries calling or emailing instead.



In summary, AI brings a lot of benefits, though it needs quite a lot of training to handle people more naturally, and people need to learn where the boundaries with AI are.
