When Technology Meets Emotions: How AI Helps Us Heal, Feel, and Remember – My Experience

3 October 2025


I remember the first time I heard about ChatGPT. It was during a professional development class in my bachelor’s program. Our tutor asked if we knew what ChatGPT was. Most of the class already did, but I didn’t. She then explained the concept behind it and encouraged us to register.

At the time, my first thought was: I don’t need it. Why would I need this if I can write things myself and search everything on Google? Little did I know it would completely change not only my academic journey, but also my life.

Today, I can’t imagine a day without ChatGPT, and to be honest, it’s scary.

I use it for everything:

  • I need to research something for my classes? ChatGPT is my starting point.
  • I want to study a new language? ChatGPT becomes my tutor.
  • I want to cook something with the ingredients I already have at home? I just ask ChatGPT for a recipe.
  • I am on holiday and spot a statue whose story I’m curious about? I send a picture to ChatGPT and get its history instantly.
  • I am having personal problems or need comforting words? I use ChatGPT as my therapist (the speaking version does a great job).
  • A simple, basic question randomly pops into my mind? I just ask ChatGPT.

It has reached the point where sometimes I don’t even want to use my own thinking, because it feels easier to just ask ChatGPT. This makes me wonder: could this be considered a kind of addiction? Is it dangerous? And if so, how can we protect ourselves from the potential negative consequences? I have also experienced some downsides: sometimes it simply makes information up. Moreover, it collects my private information, and my questions are used to train the model further (Tyler, 2025), which is even scarier. Still, I feel the advantages outweigh the disadvantages.

Of course, ChatGPT is just one example of what I can use AI for. The possibilities go far beyond that. Recently, I came across a Facebook group* called “We edit photos – bring them to life, restore them, colorize them”, where people post their photos and ask others to edit them. Usually the requests are simple, like changing the colour of a dress or swapping a background, and the responses often turn into funny jokes. But other posts are deeply emotional. Some users share photos of parents, siblings, friends or grandparents who have passed away and ask the community to use AI to bring the picture to life. The results are astonishing: from a single photo, AI can generate a moving video of the person smiling, blinking or turning their head. One post that particularly struck me was from a mother whose son was born without a leg. She asked the group to animate his picture so he could see himself with both legs. The videos were touching, and the gratitude in her response was unforgettable.

These posts always make me emotional, and eventually they inspired me to try it myself. I used an AI-powered tool on a platform called Artlist, which offers an Image-to-Video feature: you upload a photo, add a text prompt (or use an automatically generated one), select the duration, and it generates a video with motion (Artlist, n.d.). I decided to try it with a photo of my grandpa, who passed away two years ago. I had no videos of him while he was alive, and now, thanks to this tool, I do. That video has become one of my most treasured possessions.
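For anyone curious what such an image-to-video workflow might look like under the hood, here is a minimal sketch of how a script could call a generation service of this kind. To be clear, this is not Artlist’s actual API: the endpoint, parameter names and response fields below are hypothetical placeholders, purely to illustrate the upload–prompt–generate flow I described.

```python
# Hypothetical sketch of an image-to-video request. The endpoint, parameters
# and response fields are invented placeholders, NOT Artlist's real API.
import time
import requests

API_URL = "https://api.example-video-tool.com/v1/image-to-video"  # placeholder
API_KEY = "YOUR_API_KEY"  # placeholder

def animate_photo(photo_path: str, prompt: str, duration_seconds: int = 5) -> str:
    """Upload a photo with a text prompt and poll until the generated video is ready."""
    with open(photo_path, "rb") as photo:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": photo},
            data={"prompt": prompt, "duration": duration_seconds},
        )
    response.raise_for_status()
    job_id = response.json()["job_id"]  # assumed response field

    # Poll the (hypothetical) job endpoint until rendering is finished.
    while True:
        status = requests.get(
            f"{API_URL}/{job_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
        ).json()
        if status.get("state") == "done":
            return status["video_url"]  # assumed response field
        time.sleep(5)

print(animate_photo("grandpa.jpg", "a gentle smile and a slow turn of the head"))
```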

Although I recognize that AI comes with real risks, moments like these remind me that the opportunities it creates can be even more meaningful.

References:

Artlist. (n.d.). Transform your ideas into stunning visuals. https://artlist.io/image-to-video-ai

Tyler, D. (2025, July 10). Myth vs. Fact: “My Conversations with ChatGPT are Private”. LinkedIn. https://www.linkedin.com/pulse/myth-vs-fact-my-conversations-chatgpt-private-dontae-tyler-bzase

*The original name of the Facebook group is „PRZERABIAMY ZDJĘCIA – OŻYWIAMY, ODNAWIAMY, KOLORUJEMY”


When Cars Drive Themselves: Technology, Trust and Network Effects

19 September 2025


During my trip to the West Coast of the USA this July, I saw self-driving taxis for the first time. The number of them driving around San Francisco was astonishing. On one hand, I was very curious and would have liked to take a ride; on the other hand, I know I would feel uncomfortable knowing the car was driving itself. The thought of not knowing how it would behave in the event of an accident made me really uneasy. And who would be liable if an accident occurred? What about potential software errors?

Although interest in autonomous vehicles (AVs) is rising among companies such as Tesla and Waymo (the one I saw in San Francisco), a recent Financial Times article reports that David Li, co-founder of Hesai (the world’s largest maker of sensors for self-driving cars), is conservative about the pace of scaling up fully autonomous vehicles (Financial Times, 2025). On the other hand, research suggests that AVs could significantly reduce accidents compared to human-driven cars: the WHO found that over 90% of traffic crashes worldwide are caused by human error, while a study by the IIHS estimated that autonomous vehicles could prevent around 33% of crashes simply by eliminating errors such as reacting too late (SharpDrive, 2025).

In the article, Li points out that although approximately one million people are killed in car accidents every year, if an AV killed just one person a year, that would be only one-millionth of that toll, yet it could destroy the company’s reputation and make its survival really difficult. Personally, I think that since AVs have been shown to be the safer option, research and adoption shouldn’t be slowed down. However, Li’s point is very valid: society tolerates millions of human-caused deaths because they are considered “normal,” but a single AV-caused death is highly visible and can completely destroy trust in the company, despite AVs being the “safer” option. My question for discussion is: what do you think about this?

Waymo provides a robotaxi service through its app in cities like Phoenix, San Francisco and Los Angeles, allowing users to hail a self-driving vehicle without a human driver (Waymo, n.d.). I think it’s a great example of technological disruption: first, traditional taxis were threatened by Uber and similar ride-hailing services; now, Uber itself faces potential disruption from Waymo. This also links nicely to the content of our lectures. In terms of network effects, Uber demonstrates a classic direct network effect: more drivers mean more riders can be served, and more riders attract more drivers. However, drivers are human and limited by exhaustion and availability. In Waymo’s case, the network effect is different (indirect): the value grows as complementary products (data and AI) are adopted. More vehicles generate more data, which improves the AI, leading to safer and more efficient rides, which attract more customers, which generates more revenue, allowing more AVs, which in turn produce more data, and the loop goes on.
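To make that flywheel a bit more concrete, here is a toy back-of-the-envelope simulation of the loop I just described. Every number and growth relationship in it is invented purely for illustration; it is not based on any Waymo data, just a sketch of how the indirect effect can compound.

```python
# Toy model of the data-driven flywheel described above.
# All parameters and relationships are invented for illustration only.

fleet = 100          # autonomous vehicles on the road
data = 0.0           # cumulative driving data (arbitrary units)
customers = 1_000    # riders using the service

for year in range(1, 6):
    data += fleet * 10                             # more vehicles -> more data
    ai_quality = data / (data + 5_000)             # more data -> better AI (diminishing returns)
    customers = int(customers * (1 + ai_quality))  # safer, better rides -> more customers
    revenue = customers * 50                       # more customers -> more revenue
    fleet += revenue // 10_000                     # revenue reinvested -> more vehicles
    print(f"Year {year}: fleet={fleet}, data={data:.0f}, "
          f"AI quality={ai_quality:.2f}, customers={customers}")
```

Even with these made-up numbers, each pass through the loop leaves the fleet, the data pool and the customer base larger than before, which is exactly the self-reinforcing dynamic that makes this kind of indirect network effect so powerful.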

Have you ever used a self-driving taxi? If so, how was your experience? If not, would you try one? Also, who in your opinion should be held responsible in the event of an AV accident: the manufacturer or the software developer?

References:

Financial Times. (2025). Top sensor maker Hesai warns world not ready for fully driverless cars. https://www.ft.com/content/1cea9526-17a8-4554-a660-1c1e6d69676b

SharpDrive. (2025). Are self-driving cars safer than human drivers? https://www.sharpdrive.co/post/self-driving-cars-vs-human-drivers-safety#:~:text=According%20to%20data%20from%20the,and%20it%27s%20not%20always%20safer

Waymo. (n.d.). The World’s Most Experienced Driver. https://waymo.com
