AI chatbots and misinformation

8 October 2023


This article stems from an observation. During my previous internship, the media coverage of ChatGPT encouraged me to use the tool for professional purposes. Despite its usefulness for everyday tasks, I quickly realised that it could not serve as a reliable source of information. Let's look at just how imperfect, if not outright false, the information provided by AI currently is.

The first thing to know is that ChatGPT's training data ends in September 2021. As a result, it cannot answer questions about events occurring after that date. Its knowledge is limited to a fixed set of information, information that may itself be slanted, as we shall see later. For example, if we ask it who the current British Prime Minister is, it answers Boris Johnson (although a warning message is displayed).

Google Bard, on the other hand, is directly connected to the Internet, so the information it provides is presented as current (in this case, it correctly identifies Rishi Sunak as British Prime Minister).

However, a study published by NewsGuard in August 2023 suggests that these tools repeat false information 80% (Google Bard) to 98% (ChatGPT) of the time. The study points out that ChatGPT is the more insidious of the two, asserting information that is "authoritative-sounding and explicitly false", whereas Bard offers more context and sources (although those sources do not always guarantee the veracity of the information given). The exercise, repeated over a six-month period, also found no improvement (NewsGuard, 2023).

In March 2023, Europol warned that this software could be used for malicious purposes. Its ability to produce text quickly makes it ideal for propaganda and disinformation. Europol emphasised the importance of raising awareness so that these new tools are put to their best use (Europol, 2023).

Personally, I think we should be wary of AI chatbots in general. I don't think they are neutral, even when they provide factual information, because they rely on fixed databases that can sway opinion one way or another. Since the Facebook-Cambridge Analytica scandal, among others, I think people are more suspicious of information presented to them as true, and I think that is a good thing. AI chatbots are tools to be used sparingly, and it is perfectly possible to give them verified information in the prompt so that they use it as a basis, as sketched below (even then, you should check the veracity of their answers at the end).
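To make that last point concrete, here is a minimal sketch of grounding a chatbot answer in verified information supplied through the prompt. It assumes the OpenAI Python SDK (version 1.x); the model name and the verified_facts text are illustrative placeholders I have added, not something from the original post.

```python
# A minimal sketch of "grounding" a chatbot answer in verified information,
# assuming the OpenAI Python SDK (openai >= 1.0). The model name and the
# verified_facts text below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Facts the user has verified elsewhere (e.g. an official government page).
verified_facts = (
    "As of October 2023, the Prime Minister of the United Kingdom "
    "is Rishi Sunak, in office since 25 October 2022."
)

question = "Who is the current British Prime Minister?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; use whichever model you have access to
    messages=[
        {
            "role": "system",
            # Instruct the model to rely only on the supplied facts,
            # and to admit ignorance rather than guess.
            "content": (
                "Answer using ONLY the verified facts below. "
                "If they are insufficient, say you do not know.\n\n"
                f"Verified facts:\n{verified_facts}"
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

Even with this kind of grounding, the answer should still be checked against the original source: supplying verified context reduces errors but does not eliminate them.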

Sources:

Jack Brewster & McKenzie Sadeghi. (2023). Red-Teaming Finds OpenAI's ChatGPT and Google's Bard Still Spread Misinformation. NewsGuard. https://www.newsguardtech.com/wp-content/uploads/2023/08/NewsGuard-Red-Teaming-Exercise-8AUG2023.pdf

Europol. (2023). ChatGPT: The Impact of Large Language Models on Law Enforcement. https://www.europol.europa.eu/cms/sites/default/files/documents/Tech%20Watch%20Flash%20-%20The%20Impact%20of%20Large%20Language%20Models%20on%20Law%20Enforcement.pdf


1 thought on “AI chatbots and misinformation”

  1. Interesting post on a very important topic, one that could be detrimental to the development of society if not brought under control. Misinformation has been the cause of multiple conflicts throughout the world in the recent past, where posts on social media spread like wildfire and cause damage before any measures can be taken to remove them. I agree that such technology should not be used as a sole resource for information gathering. Misinformation can also spread through deepfake technology, which can make a political or influential leader appear to say whatever someone desires. Because such technology is difficult to detect, I believe more controls should be put in place, as it can become overwhelming for ordinary citizens to determine the veracity of information themselves.
