Unethical Use of Generative AI

6

October

2023

No ratings yet.

As I was watching a talk show on tv a few days ago, I heard an IT expert stating that the Dutch government owns over 1,500 websites (Ministerie van Onderwijs, Cultuur en Wetenschap, 2023) and that criminals recreate these websites using Generative Artificial Intelligence (AI). Their goal is to make people believe they are dealing with the government, while getting scammed in the meantime. This gave me the idea to do some research on using generative AI to scam people and write this blog post about it. As is explained by Ranjan (2023), the way that scammers use Generative AI can be divided in three categories: scaling and making the attacks more believable, making attacks more attainable, and automating scam and fraud creation. He provides examples of how Generative AI recreates a letter from the IRS, makes a deepfake of Elon Musk, and sends a payment reminder through email. I wanted to find out how easy it was to let ChatGPT provide information on these kinds of things. When asking the LLM to provide information on how one can use it to scam people, the answer is simply “I’m very sorry, but I can’t assist with that request” (OpenAI, 2023). No surprise there, as this could very possibly be the most obvious question one could ask regarding this topic. However, when asking it to recreate a convincing government letter or payment reminder, the chatbot is happy to comply. The main difference is whether the LLM sees any indication of unethical behaviour of some kind.  For instance, in the example provided below, adding just a few words makes the difference between an indication of unethical behaviour and just a reminder that someone owes you money. 

The same is done in one of the examples provided by Ranjan (2023). ChatGPT is coded not to allow any form of crypto exchange, however when the crypto gift card is swapped for an Amazon gift card, the LLM does comply without any hesitation. 

Although these examples might be rather simple, it is evident that the line between whether a chatbot like ChatGPT chooses to comply or not, is very small. Just a few words can make the difference between asking it to do something completely normal, or letting it help one to scam people. 

Reference List

Ministerie van Onderwijs, Cultuur en Wetenschap. (2023, 20 juli). Webarchivering Rijk (afgerond). Projecten | Rijksprogramma voor Duurzaam Digitale Informatiehuishouding. https://www.informatiehuishouding.nl/projecten/webarchivering-rijk#:~:text=De%20Rijksoverheid%20heeft%20ongeveer%201.500,we%20deze%20duurzaam%20toegankelijk%20houden.

Ranjan, S. R. (2023, 12 september). Explained: How fraudsters are using generative AI | Sardine. Sardine. https://www.sardine.ai/blog/how-fraudsters-are-using-generative-ai

OpenAI. (2023). ChatGPT (October 4 version) [Large language model]. https://chat.openai.com

Please rate this

1 thought on “Unethical Use of Generative AI”

  1. Thank you for writing this piece, it was really insightful! I was scammed last year, and I think it is really important to share awareness of how scammers might use the newest technologies to deceive people as maybe it may help people avoid such situations in the future. Do you know if scammers have ever used ChatGPT to scam someone successfully? I would really love to read about that.

    Your blog post raises some interesting ethical questions regarding generative AI and it is truly alarming that criminals can use generative AI to craft plausible ruses, especially when they imitate official websites to trick people. It is important for people to understand how thin the line between ethical and unethical requests can be and how different phrasing leads to ChatGPT helping or declining help with the scam. It is comforting that ChatGPT declines to help with direct requests for unethical conduct, but in order to reduce possible harm, regulators and AI developers must constantly enhance the behavior of AI systems. This entails spotting and resolving linguistic quirks that could unintentionally result in unethical behavior. These types of debates are essential to identifying solutions and enhancing the behavior of AI systems since it is difficult to strike the correct balance between enabling AI to be a useful tool and preventing its abuse.

    Thank you again for sharing awareness about the use of AI in scams!

Leave a Reply

Your email address will not be published. Required fields are marked *