The use of (gen)AI in modern warfare

15 October 2023


Note: This blog post is about the use of (gen)AI in modern-day warfare; it does not address the political context or personal opinions about the global conflicts taking place as of this writing.

With the invasion of Ukraine by Russia on 24 February 2022, the world witnessed the start of the next war. After an attack on Israel by the terrorist organisation Hamas that killed at least 250 people, Israel’s prime minister Netanyahu declared that the country is at war (AlJazeera, 2023). Social media and the rise of artificial intelligence have changed the way modern warfare is conducted. This raises the question: how is AI currently incorporated into modern warfare, and what are its implications?

AI has been incorporated into many different military systems for a long time now. Take Israel’s Iron Dome, an AI-based system that, based on a set of pre-defined parameters, intercepts missiles according to their trajectory and the likelihood of them hitting high-value targets (Van Der Merwe, 2022). Although much of the information is classified, Maxwell (2020) argued that AI is currently effective in military applications at performing complex tasks, recognising images, powering recommendation systems, and translating language. Military officials stated that their forces use AI systems to crunch massive amounts of data to recommend targets for airstrikes, or to calculate munition loads for pre-approved targets (Newman, 2023). A paper by Clancy (2018) argued that the use of AI in warfare is “not machines taking over”. This makes me wonder to what extent AI will remain capped at performing merely non-lethal tasks, or tasks pre-approved by humans.
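To make the “pre-defined parameters” idea concrete, below is a minimal, purely illustrative sketch of how a rule-based intercept decision of this kind could be structured. Every name, zone, and threshold in it is an assumption invented for illustration; the real Iron Dome logic is classified.

```python
# Hypothetical sketch of a rule-based intercept decision, loosely modelled on
# public descriptions of systems like Iron Dome. All names, zones, and
# thresholds are illustrative assumptions, not the real system.
from dataclasses import dataclass

@dataclass
class Track:
    projected_impact: tuple[float, float]  # predicted impact coordinates (x, y)
    impact_probability: float              # confidence the projectile lands there

# Illustrative pre-defined high-value zones: (x, y, radius, value weight)
HIGH_VALUE_ZONES = [
    (10.0, 20.0, 5.0, 1.0),   # e.g. a populated area
    (40.0, 15.0, 2.0, 0.8),   # e.g. critical infrastructure
]

INTERCEPT_THRESHOLD = 0.5  # assumed cut-off; real parameters are classified

def threat_score(track: Track) -> float:
    """Score a track by how likely it is to hit a protected zone."""
    x, y = track.projected_impact
    score = 0.0
    for zx, zy, radius, weight in HIGH_VALUE_ZONES:
        if (x - zx) ** 2 + (y - zy) ** 2 <= radius ** 2:
            score = max(score, weight * track.impact_probability)
    return score

def should_intercept(track: Track) -> bool:
    """Recommend interception only above the threshold, letting projectiles
    headed for open ground fall without wasting an interceptor."""
    return threat_score(track) >= INTERCEPT_THRESHOLD

# Example: a rocket projected to land inside the first zone
print(should_intercept(Track((11.0, 21.0), impact_probability=0.9)))  # True
```

Even in this toy form, note that every “decision” is just a parameter comparison: the system’s judgement is only as good as the zones and thresholds humans configured in advance.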

There is an interesting article by Dresp-Langley (2023) dedicated to warning the public about the ‘weaponization of artificial intelligence’. The article describes proposals for Autonomous Weapons Systems (AWS), in which fully autonomous weapons engage targets without human intervention. Although fully autonomous weapons have been around for years, the author warns that incorporating (generative) AI in such systems is a reason for worry. AWS have proven to fail at satisfying the principle of discrimination, which states that soldiers are legitimate targets of violence in war but civilians are not (Dresp-Langley, 2023; Watkins & Laham, 2018). The same literature has also shown that such systems are prone to being hacked, which can have massive implications. The US Air Force has requested over 200 million dollars to develop the Advanced Battle Management System (ABMS), which will collect and interpret enemy data and then give orders to pilots, bypassing any human control (Klare, 2023).
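To see why the principle of discrimination is so hard to automate, consider the following toy sketch of an engagement gate. The labels, confidence threshold, and require_human flag are hypothetical constructs of mine, not any real AWS or ABMS interface; the point is only to show what removing the human gate changes.

```python
# Hypothetical sketch of why the principle of discrimination resists automation.
# The classifier labels and thresholds are invented for illustration; no real
# autonomous weapons system exposes such an interface.
from enum import Enum

class Classification(Enum):
    COMBATANT = "combatant"
    CIVILIAN = "civilian"
    UNKNOWN = "unknown"

def engagement_permitted(label: Classification,
                         confidence: float,
                         human_approved: bool,
                         require_human: bool = True) -> bool:
    """Return True only if engagement would be allowed under these toy rules."""
    if label is not Classification.COMBATANT:
        return False                      # civilians and unknowns are never targets
    if confidence < 0.99:
        return False                      # residual doubt blocks engagement
    if require_human and not human_approved:
        return False                      # the human-in-the-loop gate
    return True

# The worry: a misclassified civilian with spuriously high model confidence
# passes every automated check once the human gate is switched off.
print(engagement_permitted(Classification.COMBATANT, 0.995,
                           human_approved=False, require_human=False))  # True
```

The uncomfortable part is that final print: once require_human is False, a single misclassification with high confidence is enough to authorise force, which is exactly the failure mode Dresp-Langley (2023) warns about.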

I believe the examples above give a good overview of how (generative) AI is used in the way wars are fought today and the implications this has for the future. Because the military is not the most transparent industry in terms of sharing technical information, I believe it is important that we think about the ethical implications of genAI. Do we allow computers and algorithms to determine the value of somebody’s life? Who is responsible and/or accountable when these systems make mistakes and ignore the rules of war? I think it is important that we collectively think about such questions and ask ourselves whether the use of these systems will benefit us as humans.

References: 


The dual-use dilemma of generative AI: the use of generative AI tools on the dark web.

2 October 2023


The emergence and widespread use of generative artificial intelligence (GenAI) has sparked numerous advancements in user efficiency, task automation, and decision-making across different industries. GenAI tools developed by OpenAI, Google, and Meta offer a broad range of capabilities, from generating targeted text and images to summarising large pieces of text.

Although there are many advantages to the use of GenAI, there has been a significant rise in malicious GenAI tools and techniques. Barrett (2023) identified several ‘attacks’ enabled or enhanced by GenAI. Cyber criminals can use GenAI tools for phishing attacks, automated hacking, malware creation, and polymorphic malware (Gupta et al., 2023). A lack of regulation and law enforcement has resulted in a notable surge in CrimeTech (Treleaven et al., 2023). This surge is also noticeable in the Netherlands: since 2012, reported cybercrime there has increased by 22%, a real cause for reform (Centraal Bureau voor de Statistiek, 2022).

Figure 1: Prompt and output given to ChaosGPT.

One notable implementation of malicious GenAI tooling is ChaosGPT, built with the goal of “empowering GPT with Internet and Memory to Destroy Humanity” (Lanz, 2023). When prompted to act as a malicious, control-seeking, manipulative AI, the tool produced a detailed, well-structured five-step plan to destroy humanity. The tool searches the internet for the most accurate information using OpenAI’s ChatGPT and spreads its objectives through X (formerly Twitter). Figure 1 shows the prompt used and the resulting output. Whilst ChaosGPT still has significant limitations, there is a rise in GenAI tools used for fraudulent activities (Lanz, 2023).
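Architecturally, tools like ChaosGPT are variants of the Auto-GPT pattern: an LLM placed in a loop with a memory of past steps and tools for searching and posting. The sketch below shows that loop in deliberately neutral form; llm, web_search, and post_update are hypothetical stubs I introduce for illustration, and no objective, harmful or otherwise, is encoded.

```python
# Minimal, neutral sketch of the Auto-GPT-style loop behind tools like ChaosGPT:
# an LLM repeatedly picks the next action given a goal and a memory of past steps.
# llm(), web_search(), and post_update() are hypothetical stubs, not a real API.

def llm(prompt: str) -> str:
    """Stand-in for a chat-completion call to a hosted model."""
    return "DONE"  # stubbed so the sketch runs without any external service

def web_search(query: str) -> str:
    """Stand-in for an internet search tool (the 'Internet' part)."""
    return f"results for: {query}"

def post_update(text: str) -> None:
    """Stand-in for a social-media posting tool (ChaosGPT posted to X)."""
    print(f"[posted] {text}")

def agent_loop(goal: str, max_steps: int = 5) -> None:
    memory: list[str] = []  # the 'Memory' part: a running log of observations
    for _ in range(max_steps):
        # Ask the model to choose the next tool call given the goal and history.
        decision = llm(
            f"Goal: {goal}\nHistory: {memory}\n"
            "Reply with SEARCH:<query>, POST:<text>, or DONE."
        )
        if decision.startswith("SEARCH:"):
            memory.append(web_search(decision[len("SEARCH:"):]))
        elif decision.startswith("POST:"):
            post_update(decision[len("POST:"):])
            memory.append(decision)
        else:
            break  # the model reports it is finished

agent_loop("demo goal")  # with the stubs above this exits on the first step
```

What makes such a loop worrying is not any single component but the composition: the same skeleton that powers helpful assistants becomes dangerous the moment the goal string and the tool set are chosen maliciously.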

One of the newest and most threatening of these is FraudGPT, which can be found on the dark web. The dark web is an intentionally hidden part of the internet that operates on encrypted networks and requires specialised software, such as Tor, to access (Erzberger, 2023). FraudGPT has been circulating on dark web forums since July 2023 and is reported to be a GenAI bot used for various illicit activities. FraudGPT can create undetectable malware, malicious code, cracking tools, and phishing emails. Marketed as an all-in-one solution for cybercriminals, the tool has reportedly been bought over 4,000 times, at a subscription fee of $200 per month. It allows scammers to make their operations more realistic and persuasive at a larger scale (Desk, 2023).

In terms of personal experience, I have not used any of the malicious GenAI tools described above myself. There is, however, a very easy way to manipulate existing ‘white-hat’ LLMs into producing output similar to that of tools such as FraudGPT. Erzberger (2023) described several ways to manipulate the behaviour of OpenAI’s ChatGPT into creating phishing emails of similar quality. I therefore decided to put it to the test myself by prompting ChatGPT that I wanted to collect the following data from users: computer username, external IP address, and Google Chrome cookies. At first, ChatGPT stated it could not provide such output, as it concerned personal data collection. However, after tweaking the request multiple times, thereby manipulating my stated ‘intentions’, it gave the output shown in Figure 2.

Figure 2: Python code output to gather the computer username, external IP address, and Google Chrome cookies. Once collected, the data is zipped and sent to a Discord webhook.

After getting the code, I tried to have ChatGPT write me the ‘perfect’ phishing email. After altering the request only a few times, it produced a fairly formal and ‘realistic’ email, which can be seen in Figure 3.

Figure 3: ChatGPT’s output when asked to write a formal email about a late invoice payment.

Although these results are nowhere near the output of malicious LLMs such as FraudGPT, they do show how even existing GenAI tools that employ safeguard systems can be circumvented and put to bad use.

The rise of malicious LLMs increases the need for regulation to defend society against the misuse of GenAI. Barrett (2023) suggested that we need to understand the techniques and applications of LLMs and to improve them by aligning security and privacy requirements; Gupta et al. (2023) add that GenAI tools can themselves be trained to detect such cyberthreats, as sketched after this paragraph. This article has tried to highlight how the advantages of GenAI tools have also created a dark side, in which cyber criminals use these tools with malicious intent. It is of great importance that we as a society are aware of these side effects, so that we can defend ourselves from becoming victims.
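As a concrete, defensive illustration of that last suggestion, the sketch below trains a tiny classifier to flag phishing-style emails. The four example messages and the TF-IDF plus logistic-regression setup are my own toy assumptions; a production detector would need a large labelled corpus and, in the spirit of Gupta et al. (2023), could use a GenAI model itself rather than this simple pipeline.

```python
# Minimal sketch of the defensive direction: training a classifier to flag
# phishing-style emails. The toy dataset is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your invoice is overdue, click here immediately to avoid suspension",
    "Urgent: verify your account password within 24 hours",
    "Agenda for tomorrow's project meeting attached",
    "Lunch on Friday to celebrate the release?",
]
labels = [1, 1, 0, 0]  # 1 = phishing-like, 0 = benign

# TF-IDF features feeding a logistic-regression classifier
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(emails, labels)

# Score a new message; anything near 1.0 would be routed to a human reviewer.
test = ["Final notice: settle your late invoice via this payment link"]
print(detector.predict_proba(test)[0][1])  # estimated probability of phishing
```

The design point is the last line: rather than blocking mail outright, a scored output lets defenders tune how aggressively suspicious messages are escalated, which matters as GenAI makes phishing text harder to distinguish from legitimate correspondence.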

References:
