AI in Humanitarian Aid

13

October

2023

No ratings yet.

While I was looking for original topics to write a blog post, I came across the use of AI in humanitarian Aid, especially regarding natural disasters, refugees, and wars. There are multiple areas on which AI can support humanitarian aid, one of them being its predictive capabilities. These capabilities are redefining disaster management by using advanced data analysis and machine learning algorithms. This means AI can fore AI can forecast disasters (Bates, 2017) such as earthquakes, tsunamis, floods, and hurricanes by analyzing diverse data sources like weather patterns, seismic activities, and historical disaster records. These accurate predictions enable timely evacuation and preparation, potentially saving countless lives (Fernández-Luque & Imran, 2018).

Furthermore, search and rescue operations can be improved by using AI-powered drones to assess damages, inspect collapsed buildings, and locate survivors (Basali, 2023). These drones enhance the speed and accuracy of search and rescue missions, potentially saving more lives and reducing response time. Also, AI can help optimising things like resource allocation through machine learning algorithms that process data on affected areas, population density, and resource availability. This allows humanitarian organisations to efficiently distribute resources like food, water, and medical supplies to those in need.

The previously mentioned aspects are often regarded as the foremost priorities after a disaster or conflict. However, what unfolds in the aftermath tends to be overlooked. Once the fundamental needs are addressed, new challenges emerge. One of the biggest challenges refugees must deal with every day are language barriers (Open Cultural Center, 2021). Language barriers can impede communication between aid workers and refugees. AI-powered language translation tools break down these barriers, facilitating seamless communication. The use of complex algorithms and machine learning techniques allows for understanding the essence of the source text and producing a precise translation in a certain language. AI-generated translation revolutionises language translation by swiftly and accurately processing vast amounts of text, in contrast to translating a sentence word for word (AIContentfy Team, 2023).

It can be concluded that although AI is not a replacement for human intervention, it is a very powerful ally in disaster response. As AI continues to improve its capacity to process and analyze data, it empowers humanitarian organizations to respond more efficiently to disasters. Along with the rapid technological innovation, our ability to aid those in need will only grow, and the potential of Artificial Intelligence in humanitarian aid will continue to unfold.

References

Bates, M. A. (2017). Tracking Disease: Digital epidemiology offers new promise in predicting outbreaks. IEEE Pulse8(1), 18–22. https://doi.org/10.1109/mpul.2016.2627238

Basali. (2023, September 18). Drones for Search and Rescue Operations – SAR Drones. flytbasehttps://www.flytbase.com/blog/drones-for-search-rescue#:~:text=Detection%20and%20Identification%3A%20Drones%20can,the%20dark%20or%20dense%20areas.

Fernández-Luque, L., & Imran, M. (2018). Humanitarian Health Computing Using Artificial Intelligence and Social Media: A Narrative Literature review. International Journal of Medical Informatics114, 136–142. https://doi.org/10.1016/j.ijmedinf.2018.01.015

Open Cultural Center. (2021, August 6). Language barriers and the importance of language learning for refugee and migrant communities in Europe. Open Cultural Centerhttps://openculturalcenter.org/language-barriers-and-the-importance-of-language-learning-for-refugee-and-migrant-communities-in-europe/

AIContentfy Team. (2023). AI-generated content for language translation. AIContentfyhttps://aicontentfy.com/en/blog/ai-generated-content-for-language-translation#:~:text=Government%20and%20humanitarian%20aid%3A%20In,aid%2C%20and%20other%20essential%20services.

Please rate this

Unethical Use of Generative AI

6

October

2023

No ratings yet.

As I was watching a talk show on tv a few days ago, I heard an IT expert stating that the Dutch government owns over 1,500 websites (Ministerie van Onderwijs, Cultuur en Wetenschap, 2023) and that criminals recreate these websites using Generative Artificial Intelligence (AI). Their goal is to make people believe they are dealing with the government, while getting scammed in the meantime. This gave me the idea to do some research on using generative AI to scam people and write this blog post about it. As is explained by Ranjan (2023), the way that scammers use Generative AI can be divided in three categories: scaling and making the attacks more believable, making attacks more attainable, and automating scam and fraud creation. He provides examples of how Generative AI recreates a letter from the IRS, makes a deepfake of Elon Musk, and sends a payment reminder through email. I wanted to find out how easy it was to let ChatGPT provide information on these kinds of things. When asking the LLM to provide information on how one can use it to scam people, the answer is simply “I’m very sorry, but I can’t assist with that request” (OpenAI, 2023). No surprise there, as this could very possibly be the most obvious question one could ask regarding this topic. However, when asking it to recreate a convincing government letter or payment reminder, the chatbot is happy to comply. The main difference is whether the LLM sees any indication of unethical behaviour of some kind.  For instance, in the example provided below, adding just a few words makes the difference between an indication of unethical behaviour and just a reminder that someone owes you money. 

The same is done in one of the examples provided by Ranjan (2023). ChatGPT is coded not to allow any form of crypto exchange, however when the crypto gift card is swapped for an Amazon gift card, the LLM does comply without any hesitation. 

Although these examples might be rather simple, it is evident that the line between whether a chatbot like ChatGPT chooses to comply or not, is very small. Just a few words can make the difference between asking it to do something completely normal, or letting it help one to scam people. 

Reference List

Ministerie van Onderwijs, Cultuur en Wetenschap. (2023, 20 juli). Webarchivering Rijk (afgerond). Projecten | Rijksprogramma voor Duurzaam Digitale Informatiehuishouding. https://www.informatiehuishouding.nl/projecten/webarchivering-rijk#:~:text=De%20Rijksoverheid%20heeft%20ongeveer%201.500,we%20deze%20duurzaam%20toegankelijk%20houden.

Ranjan, S. R. (2023, 12 september). Explained: How fraudsters are using generative AI | Sardine. Sardine. https://www.sardine.ai/blog/how-fraudsters-are-using-generative-ai

OpenAI. (2023). ChatGPT (October 4 version) [Large language model]. https://chat.openai.com

Please rate this