The Threat of Deepfakes

12 October 2019


Last summer an app called DeepNude caused a lot of controversy in the (social) media. DeepNude was an AI-based piece of software able to create a very realistic nude picture from any face uploaded to the app. Mass criticism followed, the app’s servers got overloaded by curious visitors, and not much later the app went offline permanently. DeepNude stated on Twitter that the probability of misuse was too high and that the world “was not ready yet”. The app has never come back online since (DeepNude Twitter, 2019). It shows that deepfake technology is becoming available to the public sooner than we thought, with all the potential risks that entails.

A definition for deepfake is “AI-based technology used to produce or alter video content so that it presents something that didn’t, in fact, occur” (Rouse, 2019). Because deepfake is AI-based technology, it is able to improve over time: as the amount of input data increases, the technology learns how to create better output. In my opinion deepfake has amazing potential in the entertainment industry, but there is a serious risk when the technology gets misused. The AI technology makes it harder and harder for humans to distinguish real videos from fake ones. Deepfake videos of world leaders like Trump and Putin can already be found on the internet, and deepfake porn videos of celebrities surface once in a while.
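To give a feel for that “learns from data” idea, here is a toy sketch in plain Python. Everything in it is a hypothetical simplification I made up for illustration: the “generator” is a single number being nudged toward the average of some real data, whereas actual deepfake systems use large neural networks. The principle is the same, though: more training against real examples makes the fake output harder to tell apart from the real thing.

```python
import random

random.seed(1)

# Toy stand-in for a deepfake generator: one parameter `mu` that gets
# nudged toward the real data with simple gradient steps. Real systems
# train millions of parameters, but the feedback loop is analogous.
def train_generator(real_samples, steps, lr=0.05):
    target = sum(real_samples) / len(real_samples)  # what "real" looks like
    mu = 0.0                                        # generator starts clueless
    for _ in range(steps):
        fake = random.gauss(mu, 0.1)                # produce one "fake" sample
        mu -= lr * 2 * (fake - target)              # gradient step on squared error
    return mu

real = [random.gauss(0.8, 0.05) for _ in range(1000)]
early = train_generator(real, steps=5)
late = train_generator(real, steps=500)
print(f"after 5 steps: {early:.2f}, after 500 steps: {late:.2f} (real mean ~0.80)")
```

With only a few steps the generator’s output is still obviously off; given more training it lands close to the real average, which is the one-dimensional version of a fake face becoming photorealistic.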

With the upcoming presidential elections of 2020 in the United States, politicians and many others are seeking solutions to prevent a scenario similar to the 2016 elections. The 2016 presidential election was characterized by the spread of fake news and the ongoing allegations resulting from it. These events very likely influenced the outcome of that election (CNN, 2019). Recently the state of California passed a law which “criminalizes the creation and distribution of video content (as well as still images and audio) that are faked to pass off as genuine footage of politicians” (Winder, 2019). In 2020 we’ll find out whether deepfakes have been restricted successfully.

I hope developers and users of deepfake technology will become aware of its huge threats and will use it in a responsible way. It is also important for society to stay critical of its news sources and to avoid supporting these types of technology misuse. According to Wired (Knight, 2019), Google has released thousands of deepfake videos to be used as training input for AI that detects other deepfake videos. Another company, called Deeptrace, is using deep learning and AI to detect and monitor deepfake videos (Deeptrace, n.d.).
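The detection side can be sketched the same way. The toy below is again plain Python with invented numbers: it pretends each clip can be reduced to a single “artifact score” (a made-up feature; real detectors learn cues like blending seams or unnatural blinking) and then learns the threshold that best separates labeled real and fake clips. In miniature, that is what classifiers trained on labeled datasets like the one Google released are doing.

```python
import random

random.seed(0)

# Hypothetical feature: suppose real clips tend to score low on visual
# artifacts and deepfakes high. The labels (0 = real, 1 = fake) come from
# a curated dataset, like the deepfake videos Google released.
real_scores = [random.gauss(0.3, 0.1) for _ in range(200)]
fake_scores = [random.gauss(0.7, 0.1) for _ in range(200)]
labeled = [(x, 0) for x in real_scores] + [(x, 1) for x in fake_scores]

def fit_threshold(samples):
    """Pick the cut-off that best separates the two labels."""
    best_t, best_acc = 0.0, 0.0
    for t in (i / 100 for i in range(101)):
        acc = sum((x > t) == bool(y) for x, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

threshold, accuracy = fit_threshold(labeled)
print(f"learned threshold ~{threshold:.2f}, training accuracy {accuracy:.0%}")
```

Of course, a one-feature threshold is nowhere near what companies like Deeptrace deploy, and the arms race cuts both ways: the same kind of training loop that powers detectors also powers ever-better generators.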

See you in 2020…

References

CNN. (2019). 2016 Presidential Election Investigation Fast Facts. Retrieved from CNN: https://edition.cnn.com/2017/10/12/us/2016-presidential-election-investigation-fast-facts/index.html

DeepNude Twitter. (2019). deepnudeapp Twitter. Retrieved from Twitter: https://twitter.com/deepnudeapp

Deeptrace. (n.d.). About Deeptrace. Retrieved from Deeptrace: https://deeptracelabs.com/about/

Knight, W. (2019). Even the AI Behind Deepfakes Can’t Save Us From Being Duped. Retrieved from Wired: https://www.wired.com/story/ai-deepfakes-cant-save-us-duped/

Rouse, M. (2019). What is deepfake (deep fake AI). Retrieved from TechTarget: https://whatis.techtarget.com/definition/deepfake

Winder, D. (2019). Forget Fake News, Deepfake Videos Are Really All About Non-Consensual Porn. Retrieved from Forbes: https://www.forbes.com/sites/daveywinder/2019/10/08/forget-2020-election-fake-news-deepfake-videos-are-all-about-the-porn/#26a929963f99