Weapons of mass destruction – why Uncle Sam wants you.

14 October 2023


The Second World War was the cradle of national and geopolitical information wars, with both sides firing rapid rounds of propaganda at each other. In the absence of internet connectivity, simple pamphlets had the power to plant ideas in entire populations. In today's digital age, where everything and everyone is connected, the influence of artificial intelligence on political propaganda should not be underestimated. This raises concern because, unlike in the Second World War, the information wars fought today extend into national politics in almost every first-world country.

Let us take a look at the world's most popular political battlefield: the US elections. In 2016, a handful of tweets containing false claims led to a shooting in a pizza restaurant (NOS, 2016). These tweets had no research backing the information they spread, yet fired at the right audience they had significant power. Individuals have immediate access to (mis)information, and this is a major opportunity for political powers seeking to gain support by polarising their battlefield.

Probably nothing I have said so far is new to you, so why not stop reading this blog and switch to social media to give your dopamine levels a boost? If you did, misinformation would reach you about six times faster than truthful information, and you would contribute to this lovely statistic (Langin, 2018). This is exactly the essence of the matter, as it is estimated that by 2026, 90% of online content will be AI-generated (Facing reality?, 2022). Combine the presence of AI on social media with the power of fake news, bundle these into propaganda, and add a grim conflict like the ones taking place in Eastern Europe or the Middle East right now, and you have got yourself the modern-day weapon of mass destruction. Congratulations! But of course, you have no business in all this, so why bother to interfere? Well, there is a big chance that you will share misinformation yourself when passing information along online (Fake news shared on social media U.S. | Statista, 2023). Whether you want it or not, Uncle Sam already has you, and you will be part of the problem.

Artificial intelligence is about to play a significant role in geopolitics, and in times of war its power is even greater. Luckily, the full potential of these powers has not yet been reached, but it is inevitable that it soon will be. It is therefore essential that we open the discussion, not about preventing the use of artificial intelligence to create conflict and polarise civilisations, but about using artificial intelligence to repair the damage it does: to counter the false information it can generate, to solve the conflicts it helps create, and to unite the groups of people it initially divides. What is the best way for us to be not part of the problem, but part of the solution?

References

Facing reality? Law enforcement and the challenge of deepfakes: An observatory report from the Europol Innovation Lab. (2022). Europol.

Fake news shared on social media U.S. | Statista. (2023, March 21). Statista. https://www.statista.com/statistics/657111/fake-news-sharing-online/

Langin, K. (2018). Fake news spreads faster than true news on Twitter—thanks to people, not bots. Science. https://doi.org/10.1126/science.aat5350

NOS. (2016, December 5). Nepnieuws leidt tot schietpartij in restaurant VS [Fake news leads to shooting in US restaurant]. NOS. https://nos.nl/artikel/2146586-nepnieuws-leidt-tot-schietpartij-in-restaurant-vs


1 thought on “Weapons of mass destruction – why Uncle Sam wants you.”

  1. First of all, thank you for this interesting blog! I agree with you that the use of AI in spreading misinformation is something that needs to become a widespread debate. I firmly believe that the potential harm of AI is much greater than most of us think right now. I was wondering, though: what do you propose as a solution to this growing problem? Some would argue that regulating the use of AI could be an option; however, I personally believe that regulation will not give us the proposed solution. I think that we should not only teach people how to use AI, but also teach them how to conduct proper research. Addressing the source rather than the tool is, in my eyes, a much more effective way of resolving the issue of misinformation. I am interested in seeing your view on this.
