Misinformation, trust and AI – my experience

4 October 2024


A few days ago I saw a video that inspired me to write this blog post. The viral video showed an interview with the famous actress Jennifer Aniston, in which she explained how, despite being in her 50s, she avoided gaining weight thanks to a special exercise and diet routine. What I found out soon after is that the footage came from a completely different interview, in which Jennifer Aniston was discussing her acting methods and her experience. The voice had been generated with AI text-to-speech technology trained on the characteristics of her real voice.

Although this particular example isn’t that important to me, it represents a larger problem: trust in AI and the use of AI to spread fake news. Further examples can be found in politics. As you can see below, social media platforms (in this case X) can easily be flooded with bots spreading made-up information. These are just two examples, but many more can be found online. It amazed me how quickly a personalized message can spread and how inconspicuous these accounts are. For most people it can be very difficult to tell an AI-run account from a committed activist.

[Example 1: screenshot of a bot account on X]
[Example 2: screenshot of a bot account on X]

AI-generated content is now being used to intensify social unrest during difficult times. During the COVID-19 pandemic, bots originating in Russia were used to spread conspiracy theories and fear among the most vulnerable. Examples include false claims about cures, vaccine side effects, and government responses. These bots often used AI to automatically generate tweets, comments, or fake “expert” profiles that made the information seem more legitimate.
Nowadays, many social media platforms and websites are introducing AI-detection methods, which themselves often rely on AI. Machine learning models are trained to identify fake news by analyzing language patterns, cross-referencing claims with reliable sources, and recognizing misleading headlines. AI can also assist in real-time verification by scanning vast amounts of online content and comparing it with trusted databases.

Some models are specifically trained to recognize the digital fingerprints left by deepfake generation processes, allowing them to detect even well-crafted deepfakes that evade human inspection. These detection systems analyze video and audio data for subtle artifacts, such as unnatural facial movements, irregular lighting, or voice modulations, that indicate manipulation.
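To make the first of these ideas (classifying text by its language patterns) a bit more concrete, here is a minimal sketch in Python using scikit-learn. The handful of headlines and labels below are invented purely for illustration; a real detection system would be trained on large labeled corpora and combined with fact cross-referencing, not on a toy dataset like this.

```python
# A toy sketch of language-pattern-based fake news detection,
# not any platform's actual system. The tiny dataset is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 1 = likely misleading, 0 = likely legitimate.
headlines = [
    "SHOCKING: doctors HATE this one weird trick that cures everything",
    "You won't BELIEVE what the government is hiding about vaccines",
    "Secret cure suppressed by big pharma, share before it's deleted!",
    "Study in peer-reviewed journal finds modest benefit of new treatment",
    "Health ministry publishes updated vaccination schedule",
    "Researchers report mixed results in latest clinical trial",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF turns each headline into word-frequency features; logistic
# regression then learns which patterns (hype words, urgency, all-caps
# tokens, since lowercasing is disabled) correlate with the labels.
model = make_pipeline(TfidfVectorizer(lowercase=False), LogisticRegression())
model.fit(headlines, labels)

test = "EXPOSED: the one weird cure they don't want you to know"
print(model.predict([test]))        # e.g. [1] -> flagged as likely misleading
print(model.predict_proba([test]))  # class probabilities for both labels
```

With more data, the same pipeline shape scales to realistic classifiers; the key design choice is that the model never checks facts, it only learns the surface style of misleading text, which is why platforms pair it with source cross-referencing.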

These tools have developed quickly in recent years, but the problem remains: AI used for misinformation is advancing at the same rate. As AI gets better at creating fake content, it also gets better at detecting it, fueling an ongoing AI arms race.

