Targeted by Something Relentless: The Ethical Implications of AI in Modern Warfare

26 September 2024


Illustration by Yoshi Sodeoka

“The advantage will go to those who no longer see the world like humans. We can now be targeted by something relentless, a thing that does not sleep.”
Army research officials Thom Hawkins and Alexander Kott, 2022 (as cited in Manson, 2024)


In the past decade, the use of artificial intelligence (AI) has increased significantly, with now-familiar applications such as facial recognition software, self-driving vehicles, search engines, and translation software. Alongside these ‘peaceful’ uses, however, AI has an increasing presence in modern warfare. From an estimated US$ 9.2 billion in 2023, investments are projected to reach US$ 38.8 billion in 2028, a compound annual growth rate (CAGR) of over 33% (MarketsandMarkets, 2024). The integration of AI into military operations is revolutionising warfare, in effect supplanting the threat of nuclear weapons with that of automated weapon systems, and raising profound ethical concerns. As nations such as the United States, Russia, China, and India rapidly develop AI capabilities for their militaries, questions about moral restraint, accountability, and the need for international governance become increasingly urgent, especially since no international legal framework yet exists to govern the use of AI, particularly in the context of armed conflict.
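That growth figure can be checked from the two market estimates alone; a minimal sketch of the compound-annual-growth-rate calculation, using the values reported by MarketsandMarkets (2024):

```python
# Illustrative sanity check of the projected growth rate cited above.
# Figures (US$ billions) are from MarketsandMarkets (2024).
start_2023 = 9.2
end_2028 = 38.8
years = 2028 - 2023

# Compound annual growth rate: (end / start)^(1 / years) - 1
cagr = (end_2028 / start_2023) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> Implied CAGR: 33.4%
```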

With regard to AI applications in warfare, a major concern is the erosion of human moral judgment in combat decisions. Renic and Schwarz (2023), building on Kelman’s (1973) insights into the dehumanisation of warfare, argue that AI-powered targeting systems diminish meaningful human oversight. Targeting becomes a routinised process in which those who execute strikes feel little sense of responsibility and innocent human beings risk being falsely identified as combatants. When machines are made to select and engage targets, the human element of ethical deliberation that once preceded the use of force is forgone. As restraints on the use of military force are removed, inhumane outcomes become more likely. As warfare becomes depersonalised, the threshold for initiating conflict is lowered and the use of AI-powered weapons is stimulated, as we see today in the war between Russia and Ukraine.

Accountability is another pressing concern. Michel (2023) points out the difficulty of determining responsibility when AI systems are involved in choices that can mean the difference between life and death. Who is at fault if an autonomous weapon misidentifies a target: the machine, the commanding officer, or the programmer? This ambiguity risks creating a “moral crumple zone” in which moral accountability is diffused and justice becomes unobtainable.

Kluth (2024) argues that while AI can enhance military effectiveness, the real ethical threat comes from fully autonomous weapons that lack human oversight. To maintain ethical standards and prevent unwanted or unintended outcomes, humans must retain ultimate control: AI should serve as a tool that aids human decision-making, never one that replaces it.

Given these challenges, the call for international governance is critical. Its near-absence today heightens the risks to international peace and security. The transformative implications of AI in all areas, not only warfare, need to be managed and regulated. Csernatoni (2024) specifically underscores this absence of global frameworks regulating military AI. Establishing international norms and agreements is crucial to ensure responsible deployment and to mitigate the security risks of unchecked technological advancement. The call for an AI oversight body is therefore growing louder and is (slowly) being addressed in guidance and regulations from supervisory bodies.

So, is it ever ethical to delegate life-and-death decisions to AI systems in warfare, even with human oversight? Could AI’s efficiency ever justify its ethical shortcomings? It is important that we start this discussion now, as the decisions we make today will have an immense impact on the future of warfare.

References

Csernatoni, R. (2024, July 17). Governing military AI amid a geopolitical minefield. Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2024/07/governing-military-ai-amid-a-geopolitical-minefield?lang=en&center=europe

Kelman, H. C. (1973). Violence without moral restraint: Reflections on the dehumanization of victims and victimizers. Journal of Social Issues, 29(4), 25–61.

Kluth, A. (2024, March 12). Don’t fear AI in war, fear autonomous weapons. Bloomberg. https://www.bloomberg.com/opinion/articles/2024-03-12/don-t-fear-ai-in-war-fear-autonomous-weapons

Manson, K. (2024, February 29). AI warfare becomes real for US military with Project Maven. Bloomberg. https://www.bloomberg.com/features/2024-ai-warfare-project-maven/

MarketsandMarkets. (2024, January 4). Artificial intelligence (AI) in military market size, share, industry growth, trends, and analysis 2028. https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-military-market-41793495.html

Michel, A. H. (2023, October 18). Inside the messy ethics of making war with machines. MIT Technology Review. https://www.technologyreview.com/2023/08/16/1077386/war-machines/

Renic, N. C., & Schwarz, E. (2023, December 19). Inhuman-in-the-loop: AI-targeting and the erosion of moral restraint. Opinio Juris. https://opiniojuris.org/2023/12/19/inhuman-in-the-loop-ai-targeting-and-the-erosion-of-moral-restraint/


2 thoughts on “Targeted by Something Relentless: The Ethical Implications of AI in Modern Warfare”

  1. Interesting post, Francois! This topic was also discussed during a course in my bachelor’s. There is an article by Taylor (2021) that discusses the responsibility gap of autonomous weapons [1]. The author proposes group responsibility, meaning that a group of agents within the Military-Industrial Complex, or the Military-Industrial Complex as a whole, should be held accountable [1]. He argues that the organizations that design and deploy such weapons can be seen as having control of the outcome when those weapons are used [1]. Following that line of argument, I believe it is a reasonable view. It is certainly better than no accountability at all.

    [1] Taylor, I. (2021). Who is responsible for killer robots? Autonomous weapons, group agency, and the Military-Industrial Complex. Journal of Applied Philosophy, 38, 320–334.

  2. Thank you for the insights, Francois! It reminded me of an article I read in The Economist earlier this summer; they ran their cover story on this topic. The following sentence really stuck with me: “Armies will fear that if they do not give their AI advisers a longer leash, they will be defeated by an adversary who does.” (The Economist, 2024) It is all very reminiscent of the Cold War, but in this scenario I fail to see positive outcomes. Implementing oversight guides and ethics codes is a necessary base, but they can only have an impact if all players on the world stage adhere to them. Going back to the quote from The Economist, I think everyone is too afraid to give up their advantage. I would argue that it can never be ethical to hand over life-and-death decisions to AI completely, at least not currently, but I am afraid that this is just not how decision-makers in the defence industry think.

    The Economist. (2024, June 20). AI will transform the character of warfare. https://www.economist.com/leaders/2024/06/20/war-and-ai
