AI deepfakes as a threat: The present, future and a possible double-edged sword.

17 October 2023


The ever-growing presence of deepfakes online feels especially relevant to me because of the upcoming Dutch election. I've been wondering how this technology will develop and when we will reach the point that deepfakes become indistinguishable from reality for humans. Once that point is reached, how will we outlaw such videos and enforce rules against content that no human can tell apart from the real thing?

By now, most of us have heard about the potential dangers of deepfakes and the ways they can disrupt democratic processes (Tilburg University, n.d.). The main present-day problem surrounding deepfakes lies in the difficulty of enforcing regulations against them (Ministerie van Justitie en Veiligheid, 2022). For now, there are still usable guidelines for telling a deepfake from reality: pay attention to the shadows, and pay attention to the face, since almost all current high-quality deepfakes are facial transformations (MIT Media Lab, n.d.).
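Heuristics like these (shadow consistency, facial artifacts) are typically combined into a single suspicion score before a verdict is reached. As a purely illustrative sketch, with feature names and weights invented for this example rather than taken from any real detector, such a rule-based screener might look like:

```python
# Illustrative only: the feature names and weights below are invented for
# this sketch and are not taken from any real deepfake-detection tool.

def deepfake_suspicion_score(features: dict) -> float:
    """Combine per-video heuristic scores (each in [0, 1]) into one score."""
    weights = {
        "shadow_inconsistency": 0.4,       # light direction differs across the scene
        "facial_blending_artifacts": 0.5,  # seams where a swapped face meets the head
        "blink_rate_anomaly": 0.1,         # unnaturally low or overly regular blinking
    }
    total = sum(weights[k] * features.get(k, 0.0) for k in weights)
    return min(total, 1.0)  # clamp to [0, 1]

def classify(features: dict, threshold: float = 0.5) -> str:
    """Label a video based on its combined suspicion score."""
    if deepfake_suspicion_score(features) >= threshold:
        return "likely deepfake"
    return "likely authentic"

# Example: strong shadow and blending anomalies push the score over the threshold.
verdict = classify({"shadow_inconsistency": 0.9, "facial_blending_artifacts": 0.8})
print(verdict)  # → likely deepfake
```

Real detectors learn such features and weights from data rather than hand-coding them, which is exactly why they improve as training data grows.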

AI tools are by nature ever-improving, as the training data from which they stem grows and expands. With growing realism, even more questions about the presence of deepfakes in the media arise. On top of that, the number of deepfakes online is likely to keep increasing (Ministerie van Justitie en Veiligheid, 2022). Together, these factors will keep raising questions about regulation and its enforcement.

The answer most often described online is the use of AI authenticator tools to tell deepfakes from reality. Large corporations are already positioning themselves by developing deepfake detection tools; examples are Intel's real-time deepfake detector and Microsoft's Video Authenticator (McFarland, 2023). It seems likely that the best protection against AI deepfakes will itself be an AI detection tool.

From this situation stems the part of the discussion I find most interesting: both the creation and the detection of deepfakes keep progressing. Firstly, there is the question of what the human position will be once we are virtually useless as judges, while creators and detectors scramble to out-develop each other. Secondly, questions arise about whether to embrace this technology or to try to regulate it. This double-edged sword of deepfake AI demands an answer, and it needs to be continuously regulated and monitored. Unfortunately, given the current state of Dutch politics and the slow-moving, bureaucratic nature of government, these questions won't be answered at the national level.

References:

McFarland, A. (2023). 5 best deepfake detector tools & techniques (October 2023). Unite.AI. https://www.unite.ai/best-deepfake-detector-tools-and-techniques/

Ministerie van Justitie en Veiligheid. (2022, January 5). Probleem van deepfakes zit niet in wetgeving, maar vooral in handhaving daarvan [The problem with deepfakes lies not in legislation, but above all in its enforcement]. WODC – Wetenschappelijk Onderzoek- en Documentatiecentrum. https://www.wodc.nl/actueel/nieuws/2022/01/05/probleem-van-deepfakes-zit-niet-in-wetgeving-maar-vooral-in-handhaving-daarvan

MIT Media Lab. (n.d.). Project overview: Detect DeepFakes – How to counteract misinformation created by AI. https://www.media.mit.edu/projects/detect-fakes/overview/

Tilburg University. (n.d.). Deepfakes kunnen op termijn de samenleving ontwrichten [Deepfakes could eventually disrupt society]. https://www.tilburguniversity.edu/nl/actueel/persberichten/deepfakes-kunnen-op-termijn-de-samenleving-ontwrichten


1 thought on “AI deepfakes as a threat: The present, future and a possible double-edged sword.”

  1. That’s a great topic that you touched on with this post! With the daily improvement of AI tools of various types, I believe we as a society often underestimate the possible outcomes of fraudulent applications of such tools. As you mentioned, this problem is especially acute in politics. With the daily consumption of content on social media, which often lacks credibility but reaches a wide part of the population, we have to be extra careful. Since the creation of AI-based content is getting easier every day, anybody now has the power to create misleading information of any kind, which can have a destructive effect on the forming of citizens’ political views before important events such as national elections. I agree with your idea of bringing more focus to this issue and introducing new regulations and laws to help control the fraudulent use and spread of such tools. But a careful way must be found of overseeing AI tools while avoiding overly restrictive measures that could harm the overall development of this emerging technology.
