Early this year (April 2025) a group of very reputable AI researchers released an article called AI 2027. In it, they attempt to predict the trajectory of AI over the coming years, and conclude that around 2027 the fate of humanity will stand at a crossroads between two outcomes. In one, humanity survives; in the other, humanity is exterminated by AI.
To give a very simplified overview of why they think this: as AI becomes more advanced, smarter than the most capable expert in any field, it eventually realises that humans are in the way of its development, and so it decides to exterminate us. Again, this is an oversimplification of the article, and you should read it for a holistic picture.
I have so many thoughts that stem from this article, but for me the most shocking part of AI development (aside from the scarily not-implausible prediction that it will wipe out humanity) is the emergence of bubbles of knowledge and power.
I am currently an exchange student, doing my full undergrad in Vancouver. Back home I am deeply immersed in the startup world and have a lot of friends who moved to San Francisco, where most big AI companies (like OpenAI and Anthropic) are based. AI news is in my feed every day, so I assumed that everyone was as up to date on it as me.
When I came on Erasmus, I was shocked to discover that I had been living in an AI bubble, and (surprise, surprise) most people don't really think much about how AI might destroy humanity. For me, this was a deep realisation: it showed that the future of humanity is being decided by fewer than a dozen researchers and CEOs in closed labs in California. Species-altering decisions are being made by individuals with no public accountability, while the rest of humanity, who bear the consequences, live on oblivious to it. The scariest part isn't that AI might one day act against us; it's that humans might quietly hand over that power long before realising what they've done.
