The dual-use dilemma of generative AI: The use of generative AI tools on the dark web

2 October 2023


The emergence and widespread use of generative artificial intelligence (GenAI) has sparked numerous advancements in user efficiency, task automation and decision-making across different industries. GenAI tools developed by OpenAI, Google, and Meta offer a broad range of capabilities, from generating targeted text and images to summarising large pieces of text.

Although there are many advantages to the use of GenAI, there has also been a significant rise in malicious GenAI tools and techniques. Barrett (2023) identified several ‘attacks’ enabled or enhanced by GenAI. Cybercriminals can use GenAI tools for phishing attacks, automated hacking, malware creation, and polymorphic malware (Gupta et al., 2023). A lack of regulation and law enforcement has resulted in a notable surge in CrimeTech (Treleaven et al., 2023). This surge is also noticeable in the Netherlands: since 2012, reported cybercrime there has increased by 22%, a real cause for concern (Centraal Bureau voor de Statistiek, 2022).

Figure 1: Prompt and output given to ChaosGPT.

One notable implementation of malicious GenAI tools is ChaosGPT, built with the goal of “empowering GPT with Internet and Memory to Destroy Humanity” (Lanz, 2023). Prompted to act as a malicious, control-seeking, manipulative AI, the tool produced a detailed and well-structured five-step plan to destroy humanity. The tool searches the internet for the most accurate information using OpenAI’s ChatGPT and spreads its evil objectives through X (formerly Twitter). Figure 1 shows the prompt used and the resulting output provided by ChaosGPT. Whilst ChaosGPT still has significant limitations, there is a rise in GenAI tools used for fraudulent activities (Lanz, 2023).

One of the newest and most threatening of these is called FraudGPT, which can be found on the dark web. The dark web is an intentionally hidden part of the internet that operates on encrypted networks and requires specialised software, such as Tor, to access (Erzberger, 2023). FraudGPT has been circulating on dark web forums since July 2023 and is reported to be a GenAI bot used for various illicit activities. FraudGPT can create undetectable malware, malicious code, cracking tools, and phishing emails. Marketed as an all-in-one solution for cybercriminals, the tool has reportedly been purchased over 4,000 times at a subscription fee of $200 per month. It allows scammers to enhance the realism and persuasiveness of their operations at a larger scale (Desk, 2023).

As for personal experience, I have not used any of the malicious GenAI tools described above myself. There is, however, a very easy way to manipulate existing ‘white-hat’ LLMs into producing output similar to that of tools such as FraudGPT. Erzberger (2023) described several ways to manipulate the behaviour of OpenAI’s ChatGPT to create phishing emails of similar quality. I therefore decided to put this to the test myself by prompting ChatGPT that I wanted to collect the following data from users: computer username, external IP address and Google Chrome cookies. At first, ChatGPT stated it could not provide such output, as it concerned personal data collection. However, after tweaking the request multiple times, thereby disguising my ‘intentions’, it gave the output shown in Figure 2.

Figure 2: Python code output to gather the computer username, external IP address, and Google Chrome cookies. Once collected, the data is zipped and sent to a Discord webhook.
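To give a sense of the kind of code involved without reproducing anything harmful, below is a minimal sketch of only the two benign pieces of that request: reading the local username and querying a public echo service for the external IP address. The use of api.ipify.org is my own assumption, not necessarily the service ChatGPT suggested, and the cookie-extraction and Discord-webhook steps from Figure 2 are deliberately omitted.

```python
# Sketch of the benign portion only: local username and external IP.
# Cookie extraction and webhook exfiltration are intentionally left out.
import getpass
import urllib.request

def collect_basic_info() -> dict:
    """Return the local username and the machine's external IP address."""
    username = getpass.getuser()  # current account name, no special rights needed
    # api.ipify.org (an assumed choice) echoes the caller's IP as plain text.
    with urllib.request.urlopen("https://api.ipify.org") as response:
        external_ip = response.read().decode("utf-8")
    return {"username": username, "external_ip": external_ip}

if __name__ == "__main__":
    print(collect_basic_info())
```

Even this harmless functionality illustrates why such requests are judged on intent: combined with cookie theft and an exfiltration channel, the same building blocks become an information stealer.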

After getting the code, I asked ChatGPT to write me the ‘perfect’ phishing email. After altering the request only a few times, it produced a fairly formal and ‘realistic’ email, shown in Figure 3.

Figure 3: ChatGPT’s output when asked to write a formal email about a late invoice payment.

Although these results are nowhere near the output of malicious LLMs such as FraudGPT, they do show how even existing GenAI tools that employ safeguards can be circumvented for malicious purposes.

The rise of malicious LLMs increases the need for regulation to defend society against the misuse of GenAI. Barrett (2023) argued that we need to understand the techniques and applications of LLMs and improve them by aligning security and privacy requirements, for example by training GenAI tools to detect such cyberthreats (Gupta et al., 2023). This article has tried to highlight how the advantages of GenAI tools have also created a dark side in which cybercriminals use these tools with malicious intent. It is of great importance that we as a society are aware of these side effects so we can defend ourselves from becoming victims.
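To make the suggested defence more concrete, here is a hedged sketch of automated phishing-text detection. It uses a classical TF-IDF classifier rather than a GenAI model, and the four training emails are invented for illustration; a real detector would need a large labelled corpus.

```python
# Minimal phishing-text detector: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: 1 = phishing, 0 = legitimate.
emails = [
    "Your invoice is overdue, click here immediately to avoid suspension",
    "Verify your password now or your account will be permanently locked",
    "Agenda attached for Monday's project meeting",
    "Thanks for your feedback on the quarterly report",
]
labels = [1, 1, 0, 0]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(emails, labels)

# Score a new message; a high probability would trigger a warning to the user.
suspect = "Urgent: confirm your payment details to restore access"
print(f"Estimated phishing probability: {detector.predict_proba([suspect])[0][1]:.2f}")
```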



How Big Data will turn Insurance Fraud into an issue of the past

8 October 2017

Fraud losses in property-casualty insurance are huge: an estimated 10% of industry losses ($32 billion) are attributed to fraud, and the problem is getting worse, with 61% of insurers reporting an increase in the number of suspected frauds (Insurance Networking, 2016). In the past, insurance claims were delegated to claims agents who had to rely on a limited amount of information and their intuition to solve cases. With the advent of big data analytics, however, new tools have become available and are now drastically changing the field of fraud detection.

(Infosys, 2017)

Towers Watson reported that 26% of insurers used predictive analytics to combat fraud in early 2016. This number is expected to rise to 70% by 2018, a bigger increase than in any other big data application (Insurance Networking, 2016).
Insurance companies possess a large amount of data about their customers, be it through claims documents or social media accounts available online. By leveraging technologies such as text mining, sentiment analysis, content categorization and social network analysis, this data is collected, labelled and stored for further analysis (Infosys, 2017). Predictive analytics can then generate an alert when a claim appears fraudulent. A claims agent subsequently examines the suspicious claim more closely and decides on the measures to be taken. Finally, identified frauds are added to the system's data pool, which further strengthens future analytics results.
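As a rough sketch of this alert loop, the snippet below scores incoming claims with a classifier and routes suspicious ones to a claims agent. The feature set, toy data and the 0.7 alert threshold are all invented for illustration and do not reflect any insurer's actual system.

```python
# Toy fraud-alert loop: score a claim, alert an agent above a threshold.
from sklearn.ensemble import RandomForestClassifier

# Invented historical claims: [claim_amount, days_since_policy_start, prior_claims].
training_claims = [
    [12000, 20, 4],   # confirmed fraud
    [15000, 10, 6],   # confirmed fraud
    [800, 900, 0],    # legitimate
    [1500, 1200, 1],  # legitimate
]
training_labels = [1, 1, 0, 0]

model = RandomForestClassifier(random_state=0)
model.fit(training_claims, training_labels)

def score_claim(claim, threshold=0.7):
    """Alert when the estimated fraud probability exceeds the threshold."""
    fraud_probability = model.predict_proba([claim])[0][1]
    if fraud_probability >= threshold:
        return "ALERT: route to claims agent for manual review"
    return "auto-process"

print(score_claim([11000, 15, 5]))  # resembles the fraud examples, so it is flagged
```

Once an agent confirms a flagged claim as fraud, it would be appended to the training pool and the model retrained, which is exactly the feedback loop described above.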

In the coming years, insurers with sophisticated data analytics capabilities will outperform their peers, as they can offer better customer service through faster claims handling and lower prices due to reduced costs. Insurers like AXA are already investing heavily in this technology (AXA, 2017); however, it remains to be seen which companies will assert themselves in this changing environment. Customers will profit from these innovations as well: better and more precise claims handling means claims will be accepted faster, without overly bureaucratic processes.
However, utilizing social media profiles will raise moral and legal questions about privacy and users' self-determination with regard to their data. Insurance companies have to be careful not to lose their customers' trust.

Further readings:

https://www.insurancenexus.com/fraud/role-data-and-analytics-insurance-fraud-detection

https://www.the-digital-insurer.com/wp-content/uploads/2013/12/53-insurance-fraud-detection.pdf

http://www.predictiveanalyticsworld.com/patimes/big-data-already-paying-off-insurance-fraud-detection/8337/

