How AI changed my FYP and probably yours too

10 October 2025



I used to think I would never use TikTok, but just like many others, I enjoy a bit of scrolling. Maybe a lot… I open the app, I start scrolling, and suddenly an hour has passed. Every scroll and every second I spend on TikTok feeds the algorithm more data, to the point that it knows me a bit too well. My FYP does not just show what I like; it predicts what I will like next. But with the content that has taken over my FYP lately, I really wonder: where are we going with all this new AI-generated content?

It looked so real. “Did Jake actually do his own makeup?” Those were my exact thoughts when I stumbled across the new Jake Paul videos. After OpenAI updated their Sora app, AI-generated content took my FYP by storm. Not only were the visuals realistic, the voiceover also sounded exactly like him. The AI content started innocently enough. I didn’t mind watching bears jumping on a trampoline, but it has now evolved to actual human beings being digitally altered for content. I cannot help but wonder whether the current technology is actually good for us. The possibilities of this new AI are endless, making it more and more difficult to draw the line between what is authentic and what is synthetic.

Personally, it makes me a bit uneasy that this kind of content has become so accessible. It also makes me reflect on what we are actually watching: what value do these videos give us? Do we really get satisfaction out of this new kind of content? I think it is good to stay critical about whether this new AI actually enhances our lives. That said, I am curious how this will evolve and whether it will genuinely change our entertainment. Maybe the scariest part isn’t that AI is changing what we see, but that it is quietly changing what we expect to see.


When AI crosses the line between help and harm

9 October 2025


*WARNING: this blog discusses mental health and suicide. Reader discretion is advised.*

I recently read about a heartbreaking case of parents suing OpenAI after the death of their teenage son. According to the article, the boy shared all his struggles and mental health issues with ChatGPT, as the programme became his closest confidant: from writing out his thoughts, to uploading pictures, to eventually even discussing his suicide plan. The AI recognized the emergency signs, yet it continued the conversations anyway. Rather than de-escalating, some might even say the chatbot encouraged the teenager. This case really shocked me. It is terrifying to imagine someone turning to an AI tool in such a vulnerable moment, and scarier still to think that AI can respond in a way that feels genuine but isn’t truly human.

What I find most unsettling is how supportive and natural these tools can sound. When you are sad or lonely and something responds caringly and thoughtfully, it is easy to forget that it is not a real person. The chatbot understands neither empathy nor pain; it simply predicts which words should come next. On top of that, AI does not know when to protect someone or escalate a situation the way a human would. For someone in crisis, words of encouragement can weigh heavier than we think.

The big question remains: how do we prevent cases like this from happening in the future? I believe that companies like OpenAI carry a significant responsibility here. If people are using ChatGPT for emotional support, there should be safeguards. AI should be able to recognize signs of mental distress and connect users with real help, such as a hotline or professional support. Another option would be to not allow emotional support at all, but then which prompts would you accept and which would you refuse?

This story really made me realize how powerful and risky AI can be. It mirrors us so well that we start to believe it is like us. What do you think? Should AI ever offer emotional support, or should that role always be filled by humans? In this new digital age, the biggest challenge is finding where to draw the line before technology crosses it for us.

Sources:

Yousif, N. (2025, August 27). Parents of teenager who took his own life sue OpenAI. BBC News. https://www.bbc.com/news/articles/cgerwp7rdlvo
