BOTS: Our new digital Enemy

24 September 2025


Imagine you are scrolling through your favourite social media platform, you open a post by a famous user you admire, and you see negative comments full of misinformation. You go straight into proving that person wrong and start a debate. What if I told you there is a 50% chance you are arguing with a bot? Crazy, right? You spent all that energy and emotional response on something that does not have a conscience. But you might be wondering: what is a bot? It is essentially an automated piece of software designed to perform certain tasks, and it is one of the most common applications of AI on the internet.
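To make that definition concrete, here is a purely illustrative sketch of what such an automated commenter could look like. It does not use any real social media API; the feed, keywords, and canned replies are all made up, and the point is only to show how little code it takes to automate a "conversation".

```python
# Illustrative sketch only: a tiny "comment bot" that scans a feed of posts
# and auto-replies with a scripted talking point whenever a watched keyword
# appears. No real platform API is used; everything here is hypothetical.
from typing import Optional

SCRIPTED_REPLIES = {
    "election": "Don't trust the official story, share this before it disappears!",
    "climate": "Experts have been wrong before, don't believe the panic.",
}

def bot_reply(post: str) -> Optional[str]:
    """Return a scripted reply if the post mentions a watched keyword, else None."""
    text = post.lower()
    for keyword, reply in SCRIPTED_REPLIES.items():
        if keyword in text:
            return reply
    return None

if __name__ == "__main__":
    feed = [  # simulated posts the bot "scrolls" through
        "New climate report released today.",
        "My cat learned a new trick!",
        "Election results expected tonight.",
    ]
    for post in feed:
        reply = bot_reply(post)
        if reply is not None:
            print(f"[bot] replying to {post!r}: {reply}")
```

A real bot farm would plug a script like this (or a language model) into thousands of fake accounts through a platform's posting interface, which is what makes the scale of the problem so hard to police.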
Disruptive Innovation? Yes. Dangerous? Most definitely.
The fact that over 50% of online activity derives from bots is highly concerning, raising the question of whether this disruptive innovation of AI is dangerous for internet users. Scams, fraud, identity theft, and the spread of misinformation are among the problems one must deal with when accessing the internet. Notoriously, online misinformation has become one of the most concerning dangers, rising at an unprecedented pace. Propaganda fuelling extremism on both the right and the left has grown because of it, ideological conflicts have become more tense, and political disinformation has increased immensely. The owners of these bots run "Bot Farms" that attack any current topic or account at the press of a button, all while disguised as real human beings, luring people into dangerous perspectives. The underground market has grown so large that governments are now involved, hoping to shift the global political landscape in their favour. A truly dystopian feeling brews silently if regulation and cybersecurity cannot keep up with the rapid emergence of bots that become smarter each day. Celebrate AI, beware the future.


6 thoughts on “BOTS: Our new digital Enemy”

  1. I found it really interesting how you described the threat of bots so vividly. The idea that over 50% of online activity could come from bots is truly shocking. I agree with the risks of misinformation and propaganda, but I also think human behavior – such as the careless sharing of content – plays just as big of a role. Maybe the debate shouldn’t only focus on regulating AI, but also on how users themselves engage more responsibly with information.

    1. 100%! We have to put our critical thinking to use and be able to distinguish truth from falsehood. This means our institutions need to raise awareness of misinformation, and we must not agree with a false message just because it resonates with our own opinions. I feel like our generation is fairly resilient to this, but previous generations, and even newer ones, are more susceptible to being misinformed.

  2. That’s a powerful and necessary warning, and the dystopian feeling it describes is very real.

    To add a bit of nuance to this dark picture, it’s also worth remembering that this is an ongoing arms race, not a one-sided defeat. The same AI technology that powers these malicious bots is also being used to build incredibly sophisticated tools to detect fake accounts, identify coordinated inauthentic behavior, and help users spot misinformation.

    1. Totally agree! Not everything is negative, and the potential of this technology is fascinating. However, many of these systems are machine-learning models, and to address malicious use they need to learn from examples of it. It is similar to the classic cat-and-mouse situation, where one side is always chasing the other but can be overwhelmed by new methods invented to counter the chaser. Let’s hope this changes for the better and turns the tide in favour of the positive use of AI, so it can prosper as a companion for us instead of a destabiliser.

  3. Hi Bruno, interesting blog!

    If you’re okay with it, I’d like to raise some questions. First, is the “50% of online activity derives from Bots” statement based on real research? Or is it based more on your personal experience? Second, what is the incentive for the owners of bots to engage in these so-called “Bot Farms”? What’s in it for them?

    You also mentioned that cybersecurity can’t keep up with the fast emergence of Bots. Are there government regulations to defend companies/people from bots? Because I think it would be very worrying if not!

    1. Hi Floris! I initially heard about the 50% of online activity through podcasts and videos. I decided to fact-check it because it seemed ludicrous, but after a bit of research I was stunned! You can check for yourself in the Imperva Bad Bot Report (2025) 🙂 Regarding the Bot Farms, they basically function the same way as an outrage mob trying to spread a message and chaos, but online. In most cases it is hateful and false content, which is extremely worrying. It is a destabilising mechanism. Governments should add more regulations to AI to help cybersecurity catch up, but that means a decrease in innovation and, for several governments, a loss of ground in the race for AI supremacy, which we can see most notably between the US and China.
