Note: This blog post is about the use of (gen)AI in modern-day warfare; it does not address the political context or personal opinions about the global conflicts taking place as of this writing.
Since the invasion of Ukraine by Russia on 24 February 2022, the world has witnessed the start of yet another war. After an attack by the terrorist organisation Hamas in Israel killed at least 250 people, Israel’s prime minister Netanyahu declared that the country is at war (Al Jazeera, 2023). Social media and the rise of artificial intelligence have changed the way modern warfare is conducted. This raises the question: how is AI currently incorporated in modern warfare, and what are its implications?
AI has been incorporated into many different military systems for a long time now. Take Israel’s Iron Dome, an AI-based system that, using a set of pre-defined parameters, intercepts missiles based on their trajectory and the likelihood of their hitting high-value targets (Van Der Merwe, 2022). Although much of the information is classified, Maxwell (2020) argued that AI is currently effective in military applications at performing complex tasks, recognising images, providing recommendations, and translating language. Military officials stated that the Israeli military uses AI systems to crunch massive amounts of data to recommend targets for airstrikes and to calculate munition loads for pre-approved targets (Newman, 2023). A paper by Clancy (2018) argued that the use of AI in warfare is “not machines taking over”. This makes me wonder to what extent AI will be capped at performing merely non-lethal tasks, or tasks pre-approved by humans.
There is an interesting article by Dresp-Langley (2023) dedicated to warning the public about the ‘weaponization of artificial intelligence’. The article describes a proposition for Autonomous Weapons Systems (AWS): fully autonomous weapons that are able to engage targets without human intervention. Although fully autonomous weapons have been around for years, the author warns that incorporating (generative) AI into such systems is a reason for worry. AWS have been shown to fail to satisfy the principle of discrimination, which states that soldiers are legitimate targets of violence in war but civilians are not (Dresp-Langley, 2023; Watkins & Laham, 2018). The same literature has also shown that such systems are prone to being hacked, which can have massive implications. The US Air Force has requested over 200 million dollars to develop the Advanced Battle Management System (ABMS), which will collect and interpret enemy data and then give orders to pilots, bypassing any human control (Klare, 2023).
I believe that the examples above provide a good overview of how (generative) AI is used in the way wars are fought today and of the implications it has for the future. Because the military is not the most transparent industry in terms of sharing technical information, I believe it is important that we think about the ethical implications of genAI. Do we allow computers and algorithms to determine the value of somebody’s life? Who is responsible and/or accountable when these systems make mistakes and ignore the rules of war? I think it is important that we collectively think about such questions and ask ourselves whether the use of these systems will benefit us as humans.
References:
- Al Jazeera. (2023, October 9). What does Israel’s declaration of war mean for Palestinians in Gaza? Al Jazeera. https://www.aljazeera.com/news/2023/10/9/what-does-israels-declaration-of-war-mean-for-palestinians-in-gaza#:~:text=Israeli%20Prime%20Minister%20Benjamin%20Netanyahu,%E2%80%9CWe%20are%20at%20war.
- Clancy, J. P. (2018). Artificial intelligence and modern warfare. Academia.edu. https://www.academia.edu/37454857/Artificial_Intelligence_and_Modern_Warfare
- Dresp-Langley, B. (2023). The weaponization of artificial intelligence: What the public needs to be aware of. Frontiers in Artificial Intelligence, 6. https://doi.org/10.3389/frai.2023.1154184
- Klare, M. T. (2023, July 17). The future of AI is war. The Nation. Retrieved October 15, 2023, from https://www.thenation.com/article/world/artificial-intelligence-us-military/
- Maxwell, P. (2020, April 20). Artificial intelligence is the future of warfare (just not in the way you think). Modern War Institute. https://mwi.westpoint.edu/artificial-intelligence-future-warfare-just-not-way-think/
- Newman, M. (2023, July 16). Israel using AI systems to plan deadly military operations. Bloomberg.com. https://www.bloomberg.com/news/articles/2023-07-16/israel-using-ai-systems-to-plan-deadly-military-operations#xj4y7vzkg
- Van Der Merwe, J. (2022). Iron Dome shows AI’s risks and rewards. CEPA. https://cepa.org/article/iron-dome-shows-ais-risks-and-rewards/
- Watkins, H. M., & Laham, S. M. (2018). The principle of discrimination: Investigating perceptions of soldiers. Group Processes & Intergroup Relations, 23(1), 3–23. https://doi.org/10.1177/1368430218796277
Excellent blog! The title and picture already drew my attention, and when I saw that you chose a topic so relevant in today’s world I really wanted to read it. War is always wrong, so why create new weapons with AI, or implement AI to improve existing weapons, for something that is wrong? The Oppenheimer movie already showed the dangers of creating the atomic bomb, as it could destroy the world. Hopefully, AI is not the next thing with that potential. I really like that you brought up this topic, and I hope this blog will help make the ethical rules around AI in modern warfare very strict.
Hi Joey, thanks for your comment. I think there are arguments both for and against using AI in warfare. On the one hand, it speeds up decision-making and will eventually cause fewer casualties on the attacking side. On the other hand, I agree with your link to the Oppenheimer movie, which clearly shows that weapons of mass destruction do not benefit society in any way. Especially in an industry shrouded in secrecy, there is a need for transparency concerning the potential of AI-enhanced systems in warfare.