Who is Jianwei Xun? My perspective on rethinking philosophy in the age of GenAI

4 October 2025


What if the most insightful philosopher of the digital age wasn’t a “who”, but a “what”?

Initially presented as a Hong Kong-born philosopher based in Berlin, Jianwei Xun quickly gained traction in European intellectual circles for his work Hypnocracy: Trump, Musk, and the New Architecture of Reality. The book offers a sharp analysis of how power operates in the digital age: not through oppression, but through the stories we consume and believe. Its central concept, “hypnocracy”, describes a new form of manipulation in which power works by shaping our very state of consciousness through algorithms, filter bubbles, and personalised timelines, lulling readers into a state of collective trance.

Ironically, the book embodies the very phenomenon it criticises: it is itself an AI-generated text about AI-driven manipulation. Its true author is the Italian essayist and publisher Andrea Colamedici, who revealed that Jianwei Xun is a “distributed philosophical entity,” a collaborative construct between human intelligence and artificial intelligence systems. Colamedici used AI platforms, specifically ChatGPT and Claude, not as a ghostwriter, but as an “interlocutor” (El País, 2025). He would present ideas, challenge the AI’s assertions, request deeper analysis, and even pit different AIs against each other in a “fertile conflict”, using a method he terms “ontological engineering” (Le Grand Continent, 2025). He estimates that about 40% of the book’s early drafts were AI-generated; he then curated, merged, and refined that material.
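For readers who think in code, the back-and-forth Colamedici describes might look something like the minimal Python sketch below. It is only my own illustration of the described workflow: ask_model is a hypothetical placeholder rather than any real API, and the prompts and loop structure are assumptions, not his actual method.

```python
# Purely hypothetical sketch of the "fertile conflict" workflow described above.
# ask_model() is a placeholder introduced for illustration; it calls no real API,
# and nothing here reproduces Colamedici's actual prompts or process.

def ask_model(model: str, prompt: str) -> str:
    """Placeholder standing in for a call to some chat-model API."""
    return f"[{model}'s reply to: {prompt[:50]}...]"

def dialectical_draft(thesis: str, models=("model_a", "model_b"), rounds: int = 2) -> str:
    """Iteratively develop an idea by letting two models build on and challenge it."""
    draft = thesis
    for _ in range(rounds):
        # One model deepens the idea, the other contests it.
        developed = ask_model(models[0], f"Develop this idea in depth:\n{draft}")
        critique = ask_model(models[1], f"Challenge its weakest points:\n{developed}")
        # The human curator would merge, cut, and rewrite by hand; here we just concatenate.
        draft = f"{developed}\n\nObjections to work through:\n{critique}"
    return draft

print(dialectical_draft("Power now operates by shaping states of consciousness."))
```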

To solidify the persona and anchor it in the academic digital ecosystem, Colamedici built a detailed fictional biography, a professional website, and scholarly publications uploaded to Academia.edu. He even created a fictional literary agent, Sarah Horowitz, to handle communications with journalists and publishers. The deception unraveled in April 2025, when L’Espresso journalist Sabina Minardi investigated and found that the philosopher was a pseudonym for this human-AI collaboration. Her suspicions arose from “linguistic clues”, a “phraseology that seemed designed to hypnotise”, and the author’s evasiveness (L’Espresso, 2025). In response, Colamedici clarified his intent: to let the medium become the message. Readers and the media weren’t just reading about a digitally constructed reality; they were participating in one, being “hypnotised” by a coherent and persuasive intellectual voice that had no physical existence.

My Personal Experience: Contextualising the Case as a Mirror for Our Times

Personally, I reckon the Jianwei Xun experiment is a Rosetta Stone for understanding the implications of Generative AI in the reality we are living in. Colamedici insists that Jianwei Xun is not a pseudonym but a “device” and an “emergent form of authorship” (Le Grand Continent, 2025). He resists the idea that Xun is merely the avatar of a writer using a tool, proposing instead a “third space where human and artificial cognition meet”. Jianwei Xun, which can be described as a conceptual deepfake, was conceived to create a visceral, memorable understanding of a complex philosophical idea.

At this point, we as readers and consumers are forced to completely rethink what originality and authority mean. Not only does this represent a fundamental challenge for any content-driven digital strategy, but it also points to a unique strategic opportunity: using AI-generated narratives to build immersive brand stories or educational experiences that resonate on a deeper level than traditional content, moving beyond shallow, throwaway AI-generated material. Xun’s case also landed in a regulatory grey area, arguably running afoul of the European AI Act, which mandates transparency for AI-generated content. The (initial) failure of media and institutional gatekeepers to tell the artificial from the human in this case underscores a dual imperative: leveraging AI for innovative engagement while simultaneously building robust AI literacy and validation processes.

To me, Jianwei Xun’s Hypnocracy holds up an uncomfortable mirror. Even though the deception initially made me uneasy, I cannot dismiss its brilliance as a performative critique. In a powerful yet dangerous way, it proves that in the age of AI a compelling narrative, regardless of its origin, can capture attention and influence thought. As already mentioned, our responsibility should be to advocate for clear disclosure when AI is used as a collaborator, building trust in an era of synthetic content. At the same time, we should move from passive consumption to active, critical co-creation, using AI as a “maieutic” interlocutor to nurture and challenge our own thinking, much as Colamedici did in trying to follow Socrates’ footsteps in the era of digitalisation. The goal should be to engage with the machine to sharpen our own critical faculties, avoiding the algorithmic echo chamber and aiming at a more innovative, responsible, and human-centric approach.

Because the era of hypnocracy is here, and we have to navigate it with our eyes wide open.

Sources

Limón, R. (2025, April 7). Jianwei Xun, the supposed philosopher behind the hypnocracy theory, does not exist and is a product of artificial intelligence. El País (English). https://english.elpais.com/technology/2025-04-07/jianwei-xun-the-supposed-philosopher-behind-the-hypnocracy-theory-does-not-exist-and-is-a-product-of-artificial-intelligence.html

Gressani, G. (2025, April 4). Chi è Jianwei Xun: una conversazione con Jianwei Xun. Le Grand Continent. https://legrandcontinent.eu/it/2025/04/04/chi-e-jianwei-xun-una-conversazione-con-jianwei-xun/

Minardi, S. (2025, April 7). Ipnocrazia: best seller libro – chi è Xun. L’Espresso. https://lespresso.it/c/inchieste/2025/4/7/ipnocrazia-best-seller-libro-chi-e-xun/53621

Carelli, E. (2025, April 3). Ipnocrazia, intelligenza artificiale, scrittura, filosofia. L’Espresso. https://lespresso.it/c/opinioni/2025/4/3/ipnocrazia-intelligenza-artificiale-scrittura-filosofia-lespresso/53598

The New York Times. (2025, April 30). The hypnocracy: AI philosopher book. https://www.nytimes.com/2025/04/30/world/europe/hypnocracy-ai-philosopher-book.html

Tlon. (2025). Ipnocrazia. Trump, Musk e la nuova architettura della realtà. Jianwei Xun. https://tlon.it/ipnocrazia/


Netflix’s (seemingly too?) Perfect Recommendation System

7 September 2024


Netflix is widely seen as one of the world’s most successful streaming platforms to date. Many might attribute this success to its broad library of fantastic titles and its simple yet effective UI. However, behind the scenes a lot more is going on to keep users on the platform longer and, most importantly, reduce subscriber churn.

While Netflix has 277 million paid subscribers across 190 countries, no two of these users get the same experience. Over time, Netflix has developed its incredibly intelligent Netflix Recommendation Engine (NRE) to leverage data science and create the ultimate personalized experience for every user. I think most of us are aware of some personalization algorithms, but not the extent to which they go!

The NRE is composed of multiple algorithms that filter Netflix’s content based on a user’s profile. These algorithms filter through more than 5,000 different titles, divided into clusters, all based on an individual subscriber’s preferences. The NRE works by analyzing a wealth of data, including a user’s viewing history, how long they watch specific titles, and even how often they pause or fast-forward. As a result, the titles with the highest likelihood of being watched by that user are pushed to the front. According to Netflix, this is essential, since the company estimates it has only around 90 seconds to grab a consumer’s attention. I think, as consumer attention drops even further (with apps like TikTok destroying our attention span), this might become even more of a problem in the future. I mean, who has the time to sit down and watch a whole movie these days??
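To make the idea of engagement-based ranking a bit more concrete, here is a minimal, purely illustrative Python sketch. The signal names, weights, and example titles are my own assumptions for illustration, not Netflix’s actual NRE internals, which are far more sophisticated (and proprietary).

```python
# Toy engagement-based ranker. Signals and weights are hypothetical,
# chosen only to illustrate how viewing behaviour could drive ranking.
from dataclasses import dataclass

@dataclass
class EngagementSignals:
    completion_rate: float   # share of the title the user actually watched (0-1)
    rewatch_count: int       # how often the user returned to the title
    fast_forwards: int       # frequent skipping suggests lower interest
    paused_minutes: float    # long pauses may signal drop-off

def score(s: EngagementSignals) -> float:
    """Combine a few viewing signals into a single relevance score."""
    return (
        3.0 * s.completion_rate
        + 1.5 * s.rewatch_count
        - 0.5 * s.fast_forwards
        - 0.1 * s.paused_minutes
    )

# Rank a user's candidate titles so the most promising ones are pushed to the front.
candidates = {
    "Niche documentary": EngagementSignals(0.9, 2, 0, 1.0),
    "Blockbuster series": EngagementSignals(0.4, 0, 5, 12.0),
}
ranked = sorted(candidates, key=lambda title: score(candidates[title]), reverse=True)
print(ranked)  # titles ordered by estimated likelihood of being watched
```

Because the scores are computed per user, two subscribers with different viewing behaviour end up with entirely different orderings, which is exactly why your homepage never looks like anyone else’s.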

This also ties into the concept of the Long Tail we discussed, which refers to offering a wide variety of niche products that appeal to smaller audience segments. Using its recommendation algorithms, Netflix can now surface lesser-known titles to the right audiences. While these niche titles might never have been discovered by users in the past, Netflix can now monetize the Long Tail of its library. You have surely noticed that your family or friends have titles on their homepage that you would never see on your own; this is the NRE at work.

While this model is largely successful, it might raise concerns around content bias. For example, Netflix’s use of different promotional images for the same content based on a user’s perceived race or preferences has sparked debate. Although the intent is to tailor recommendations more effectively, it risks reinforcing stereotypes and narrowing the scope of content that users are exposed to.

Ultimately, user data is exchanged for a super personalized experience, though this experience can sometimes be flawed. What do you think about Netflix’s NRE and its effects on users? Do you think this data exchange is fine, or would you rather just see the same Homepage as everyone else?


The power of Big Tech companies

9 October 2021


How does social media impact our world? The use of social media has increased enormously over the years, and you would almost forget what life was like without it. But on the 4th of October 2021, people around the world got a glimpse of what a world without social media would look like. A global outage took place, and Facebook and its family of apps, including Instagram and WhatsApp, went down for more than six hours. More than 3.5 billion people around the world rely on these platforms to communicate with friends and family or to run their businesses.

Additionally, that same week Frances Haugen, a former Facebook employee, provided evidence to lawmakers, regulators, and the news media revealing how the company is causing harm.

The abovementioned outage and the revelations brought to light by whistleblower Frances Haugen not only showed how dependent the world has become on social media, but also added fuel to an ongoing debate: the ever-growing power of Big Tech companies and the way those companies deal with the harm caused by their platforms.

Companies such as Facebook, Amazon, Google, and Apple all provide digital services that have become so ingrained in our lives that it is almost impossible to avoid them. Some argue that this success comes with responsibility, and people are increasingly questioning whether those companies are living up to it. Two critical questions are: how do Big Tech companies protect the privacy of their users, and to what extent can they be held liable for what happens on their platforms?

According to Haugen, companies like Facebook and Instagram use amplification algorithms and engagement-based ranking that lead children and teenagers to harmful online content, and they make little effort to solve the issue because of the profit it generates. Haugen recommends reforming Section 230, which protects companies from liability for third-party content on their platforms. She argues that the government has to step in and that companies should be held responsible for the consequences of their algorithms. Even though something has to change, one may ask whether government oversight is the right solution. If the government were to regulate tech companies’ algorithms, it could also affect journalism and free speech, and what consequences would that have?

References

Alter, C. (2021, October 6). How Fixing Facebook’s Algorithm Could Help Teens—and Democracy. Time. https://time.com/6104157/facebook-testimony-teens-algorithm/?utm_source=roundup&utm_campaign=20210929

Deutsche Welle. (2021, April 16). Why Big Tech is under fire around the world. DW. https://www.dw.com/en/why-big-tech-is-under-fire-around-the-world/av-57230952

Isaac, M., & Frenkel, S. (2021, October 8). Facebook, Instagram, WhatsApp Were Down: Here’s What to Know. The New York Times. https://www.nytimes.com/2021/10/04/technology/facebook-down.html

Mac, R., & Kang, C. (2021, October 6). Whistle-Blower Says Facebook ‘Chooses Profits Over Safety.’ The New York Times. https://www.nytimes.com/2021/10/03/technology/whistle-blower-facebook-frances-haugen.html
