When AI Makes Reality Look Too Real

8 October 2025


Figure 1: 21 years old vs. 80 years old (Homiwork, 2025)

Modern AI tools can create human-like images that look incredibly real. With just a few clicks, you can see how you might look at 80 years old, or even generate a picture of you and your favorite celebrity ‘together’. At first it feels surreal that technology has come this far, even though to Gen Z it seems completely normal. To be honest, the more I explore it, the more I realize how exciting and strange it is that AI can blur the line between what’s real and what’s fake.

AI image generators use models trained on massive datasets of human faces to create realistic images (Fotor, n.d.). They’ve become popular because they tap into something deeply human: our curiosity about ourselves and others. We love seeing ‘what ifs’: What if I had a beard? What if my partner and I were older? What if we had kids together? In a way, these images are a new form of digital storytelling. They let us play with identity, fantasy, and imagination in a way that photography never could.
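
For anyone curious what this looks like under the hood, here is a minimal, hypothetical sketch in Python. It is not the code behind Fotor or Homiwork (that is proprietary); it only shows how an open-source text-to-image diffusion model, through the Hugging Face diffusers library, can turn a short prompt into a photorealistic portrait. The checkpoint name and prompt are assumptions chosen for illustration.

```python
# Minimal sketch: generating a photorealistic portrait with an open-source
# diffusion model. Assumes the `diffusers` and `torch` packages and a publicly
# available Stable Diffusion checkpoint (not the tool used by Fotor/Homiwork).
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image pipeline (downloaded on first use).
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU makes generation practical; CPU also works, slowly

# Describe the "what if" you want to see; the model denoises random noise into an image.
prompt = "photorealistic portrait of an 80-year-old person, studio lighting"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("aged_portrait.png")
```

The point of the sketch is simply that a sentence of text is enough input: everything that makes the result look "real" comes from patterns the model learned from its training data.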

But it also makes me think about how personal and emotional these AI-generated images can become. When you see a picture that looks exactly like you next to someone you were never actually with, it can stir up feelings about something that never happened. It’s a reminder that AI isn’t just about logic; it can also touch our emotions, sometimes in unexpected ways.

Rather than seeing this as purely dangerous, I see it as a creative opportunity that comes with responsibility. Artists are already using these tools to explore ideas of memory, time, and relationships, creating portraits that show how technology can expand what it means to be human. At the same time, we should remember that every powerful tool demands awareness. AI doesn’t just reflect reality; it can reimagine it.

So maybe the real challenge isn’t whether these tools are ‘good’ or ‘bad,’ but how we use them. If we combine creativity with mindfulness, AI-generated images can become more than tricks of the eye; they can be windows into our imagination.

Fotor. (n.d.). AI Face Generator: Create unique human faces using AI. https://www.fotor.com/features/ai-face-generator/

Homiwork. (2024). AI Age Progression: How Will You Look at 18 or Older. https://homiwork.com/en/app/pictures/create/age-transformation


When Algorithms Backfire: The Dark Side of Network Effects

18 September 2025


Figure: “High time to face the weaknesses of artificial intelligence” (de Volkskrant)

The concept of network effects is easy to describe: the more people use a service, the more valuable it becomes (Banton, 2024). This is why network effects play such a significant role in the growth of products and services. What most people don’t realize is that, on social platforms, recommendation algorithms largely determine what people get to see on their screens. Content doesn’t ‘just appear’ in your feed.

Algorithms are designed to push popular or highly engaging content onto more feeds and to ensure its visibility. While this mechanism helps platforms grow, it also has a darker side: algorithms don’t distinguish between positive and negative content. Engagement-driven algorithms prioritize content that triggers strong emotional reactions over objective news articles, which means fake news and misinformation spread faster than accurate information (Vosoughi et al., 2018). As awareness of misinformation grows, public trust in platforms declines. Instead of reinforcing network effects, this process creates negative network effects, where a product or service becomes less valuable as more people use it.

Feed algorithms classify user preferences by collecting behavioral data and then match users with a precisely targeted, continuous stream of information. This creates a powerful driving force for group polarization and contributes strongly to the formation of echo chambers. Recent studies show that these echo chambers can promote the spread of misleading information, fake news, and rumors (Gao et al., 2023). YouTube’s recommendation system, for example, has been criticized for pushing viewers toward increasingly radical content: the algorithm seems to consistently recommend more extreme versions of whatever users are already watching, so that someone who starts with videos about jogging may end up with videos about ultramarathons (Tufekci, 2018). A toy example of such engagement-only ranking is sketched below.
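
To make this mechanism concrete, here is a deliberately simplified toy ranker in Python. It is my own illustrative sketch, not the actual code of any platform: it only shows what happens when posts are ordered by predicted engagement alone, with accuracy playing no part in the ranking.

```python
# Toy sketch of an engagement-driven feed ranker (illustration only, not any
# platform's real code). Posts are sorted purely by predicted engagement, so
# the accuracy score is collected but never influences what gets surfaced.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # expected clicks, shares, and comments
    accuracy_score: float        # how factual the post is (ignored by the ranker)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort key uses engagement only; accuracy_score never enters the ranking.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Calm, factual news report", predicted_engagement=0.2, accuracy_score=0.95),
    Post("Outrage-bait rumor", predicted_engagement=0.9, accuracy_score=0.10),
])
for post in feed:
    print(post.title)  # the rumor comes out on top
```

Even in this two-line feed, the emotionally charged rumor beats the factual report, which is exactly the dynamic described above.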

By prioritizing engagement-driven signals over objective information, algorithms contribute significantly to the formation of negative network effects: they can create a harmful loop that damages trust and spreads misinformation.


References

Banton, C. (2024, August 22). What is the network effect? Investopedia. https://www.investopedia.com/terms/n/network-effect.asp

Gao, Y., Liu, F., & Gao, L. (2023). Echo chamber effects on short video platforms. Scientific Reports, 13(1). https://doi.org/10.1038/s41598-023-33370-1

Tufekci, Z. (2018, March 10). YouTube, the great radicalizer. The New York Times. https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
