Targeted by Something Relentless: The Ethical Implications of AI in Modern Warfare

26 September 2024


Illustration by Yoshi Sodeoka

“The advantage will go ‘to those who no longer see the world like humans. We can now be targeted by something relentless, a thing that does not sleep.’”
Army research officials Thom Hawkins and Alexander Kott, 2022 (as cited in Manson, 2024)


Over the past decade, the use of artificial intelligence (AI) has increased significantly, with now well-known applications such as facial recognition software, self-driving vehicles, search engines, and translation software. Alongside these ‘peaceful’ uses, however, AI has an increasingly prominent presence in modern warfare. Investment in military AI is projected to grow from an estimated US$9.2 billion in 2023 to US$38.8 billion in 2028, a compound annual growth rate (CAGR) of over 33% (MarketsandMarkets, 2024). The integration of AI into military operations is revolutionising warfare, in effect replacing the threat of nuclear weapons with automated weapon systems, and raising profound ethical concerns. As nations such as the United States, Russia, China, and India rapidly develop AI capabilities for their militaries, questions about moral restraint, accountability, and the need for international governance become increasingly urgent, especially since no international legal framework yet exists to regulate the use of AI, particularly in armed conflict.
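As a quick sanity check on that growth figure, the compound annual growth rate can be verified directly from the two reported market sizes. The minimal Python sketch below assumes a five-year horizon from 2023 to 2028 and uses the dollar amounts from the MarketsandMarkets report cited above:

```python
# Back-of-the-envelope check of the projected growth in military AI spending:
# US$9.2 billion (2023) to US$38.8 billion (2028), per MarketsandMarkets (2024).

start, end, years = 9.2, 38.8, 5  # market size in US$ billions; 2023 -> 2028

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # CAGR: 33.4%, consistent with the "over 33%" figure
```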

With regard to AI applications in warfare, a major concern is the erosion of human moral judgment in combat decisions. Renic and Schwarz (2023), building on Kelman’s (1973) insights into the dehumanisation of warfare, argue that AI-powered targeting systems diminish meaningful human oversight: targeting becomes a routinised process in which those who carry out strikes feel little sense of responsibility and innocent people are falsely identified as combatants. When machines are made to select and engage targets, the human element of ethical deliberation that previously existed is foregone, and as restraints on the use of military force are removed, inhumane outcomes follow. Depersonalised warfare thus lowers the threshold for initiating conflict and encourages the use of AI-powered weapons, as we see today in the war between Russia and Ukraine.

Accountability is another pressing concern. Michel (2023) points out the difficulty of determining responsibility when AI systems are involved in choices that can mean the difference between life and death. Who is at fault if an autonomous weapon misidentifies a target: the machine, the commanding officer, or the programmer? This ambiguity risks creating a “moral crumple zone” in which moral accountability is diminished and justice becomes unattainable.

Kluth (2024) argues that while AI can enhance military effectiveness, it is fully autonomous weapons lacking human oversight that pose the real ethical threat. To maintain ethical standards and prevent unwanted or unintended outcomes, humans must retain ultimate control: AI should serve as a tool that aids human decision-making, never one that replaces it.

Given these challenges, the call for international governance is critical. Because such governance is largely absent today, the risks to international peace and security are heightened; the transformative implications of AI in all areas, not only warfare, need to be managed and regulated. Csernatoni (2024) specifically underscores this absence of global frameworks for military AI, arguing that establishing international norms and agreements is crucial to ensure responsible deployment and to mitigate the security risks of unchecked technological advancement. The call for an AI oversight body is therefore growing stronger and is slowly being addressed in guidance and regulations from supervisory bodies.

So, is it ever ethical to delegate life-and-death decisions to AI systems in warfare, even with human oversight? Could AI’s efficiency ever justify its ethical shortcomings? It is important that we start this discussion now, because the decisions we make today will have an immense impact on the future of warfare.

References

Csernatoni, R. (2024, July 17). Governing military AI amid a geopolitical minefield. Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2024/07/governing-military-ai-amid-a-geopolitical-minefield?lang=en&center=europe

Kelman, H. C. (1973). Violence without moral restraint: Reflections on the dehumanization of victims and victimizers. Journal of Social Issues, 29(4), 25–61.

Kluth, A. (2024, March 12). Don’t fear AI in war, fear autonomous weapons. Bloomberg.com. https://www.bloomberg.com/opinion/articles/2024-03-12/don-t-fear-ai-in-war-fear-autonomous-weapons

Manson, K. (2024, February 29). AI warfare becomes real for US military with Project Maven. Bloomberg.com. https://www.bloomberg.com/features/2024-ai-warfare-project-maven/

MarketsandMarkets. (2024, January 4). Artificial intelligence (AI) in military market size, share, industry growth, trends, and analysis 2028. https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-military-market-41793495.html

Michel, A. H. (2023, August 16). Inside the messy ethics of making war with machines. MIT Technology Review. https://www.technologyreview.com/2023/08/16/1077386/war-machines/

Renic, N. C., & Schwarz, E. (2023, December 19). Inhuman-in-the-loop: AI-targeting and the erosion of moral restraint. Opinio Juris. https://opiniojuris.org/2023/12/19/inhuman-in-the-loop-ai-targeting-and-the-erosion-of-moral-restraint/
