A few days ago I saw a video that inspired me to write this blog post. The viral clip appeared to show an interview with a famous actress, Jennifer Aniston, in which she explained how, despite being in her 50s, she had avoided gaining weight thanks to a special exercise and diet routine. What I found out soon after is that the footage actually came from a totally different interview, one in which Jennifer Aniston discussed her acting methods and her experience. The audio had been generated with AI text-to-voice technology trained to mimic her voice.
Although this example isn’t that important to me, it represents a larger problem. In my opinion, that problem is trust in AI and the use of AI to spread fake news. Further examples can be found in politics. As you can see below, social media (in this example X) can easily be flooded with bots spreading made-up information. These are just two examples, but many more can easily be found online. It amazed me how quickly a personalized message can spread and how inconspicuous these accounts are. For most people it can be very difficult to tell an AI-driven account apart from a committed activist.
AI-generated content is now used to intensify social unrest during difficult times. During the Covid-19 pandemic, bots originating in Russia were used to spread conspiracy theories and fear among those most vulnerable. Examples include false claims about cures, vaccine side effects, and governmental responses. These bots often used AI to automatically generate tweets, comments, or fake “expert” accounts that made the misinformation seem more legitimate.
Nowadays, many social media platforms and websites are trying to introduce AI-detection methods and technologies, which themselves rely on AI. Machine learning models are being trained to identify fake news by analyzing language patterns, cross-referencing facts with reliable sources, and recognizing misleading headlines. AI can also assist in real-time verification by scanning vast amounts of online content and comparing it with trusted databases. Some AI models are specifically trained to recognize the digital fingerprints left by deepfake generation processes, allowing them to detect even well-crafted deepfakes that evade human detection. AI-based detection systems analyze video and audio data, looking for subtle artifacts such as unnatural facial movements, irregular lighting, or voice modulations that indicate manipulation.
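To make the “language patterns” idea a bit more concrete, here is a minimal sketch of how such a text classifier could be put together with scikit-learn. The handful of example texts and labels are made up purely for illustration, and a real detector would be trained on a large labelled dataset and evaluated much more carefully.

```python
# Minimal sketch of a language-pattern-based misinformation classifier.
# The tiny dataset below is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "SHOCKING: secret cure hidden by doctors, share before it's deleted!",
    "Scientists report early results from a peer-reviewed vaccine trial.",
    "You won't believe what the government doesn't want you to know!!!",
    "The health ministry published updated guidance on booster doses.",
]
labels = [1, 0, 1, 0]  # 1 = likely misinformation, 0 = likely legitimate

# TF-IDF turns each text into word- and phrase-frequency features;
# logistic regression learns which patterns correlate with each label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Miracle remedy BANNED by big pharma, act now!"]))
```

Real systems combine signals like these with fact cross-referencing and source reputation rather than relying on word patterns alone.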
These tools have developed quickly in recent years, but the problem remains, because the AI used for misinformation is improving at the same rate. As AI gets better at creating fake content, it also gets better at detecting it, leading to an AI arms race.
I really liked your post, especially how you were able to show examples that illustrate this frustrating phenomenon. I often wonder whether fingerprinting will work for this specific problem. Since most social media interaction involves very short pieces of text, it might be harder to leave “markers” for another system to detect. For example, one fingerprinting-based detection method relies on ChatGPT’s overuse of the word “however”, which might be hard to shove into every short response and could be easily scrubbed out by a malicious actor.
Thanks, Jeremi, for the insightful post. I found it very engaging, as this is a topic that I believe everyone should be more aware of. The examples you used were particularly relevant and eye-opening, especially regarding how bots are used to spread misinformation. I’ll definitely be more mindful of this moving forward. One solution I strongly support is enhancing digital literacy education, starting in schools and extending to public awareness efforts, particularly for Gen X and older people.
Thank you for sharing your experience, Jeremi! The example you provided really shows how easily AI can be misused to spread misinformation. I found the part about bots leaving comments on different social media posts particularly interesting. I’ll be on the lookout for those rage-baiters. The AI “arms race” you mentioned between generating and detecting fake content is an interesting challenge as well. Do you think we’ll ever reach a point where AI detection can fully outpace AI-driven misinformation, or will it always be a reactive game?