The Threat of Deepfakes

12 October 2019


Last summer an app called DeepNude caused a lot of controversy in the (social) media. DeepNude was an AI-based piece of software with the ability to create very realistic nude pictures from any face uploaded to the app. Mass criticism followed, the app’s servers were overloaded by curious users, and not much later the app went offline permanently. DeepNude stated on Twitter that the probability of misuse was too high and that the world “was not ready yet”. The app has never come back online since (Deepnude Twitter, 2019). It shows that deepfake technology is becoming available to the public sooner than we thought, with all the risks that entails.

A definition of deepfake is “AI-based technology used to produce or alter video content so that it presents something that didn’t, in fact, occur” (Rouse, 2019). Because deepfake is AI-based technology, it improves over time: as the amount of input data grows, the technology learns how to create better output. In my opinion deepfake has amazing potential in the entertainment industry, but there is a serious risk when the technology is misused. The AI technology makes it harder and harder for humans to distinguish real videos from fake ones. Deepfake videos of world leaders like Trump and Putin can already be found on the internet, and deepfake porn videos of celebrities are discovered every once in a while.

With the upcoming 2020 presidential elections in the United States, politicians and many others are seeking ways to prevent a repeat of the 2016 elections, which were characterized by the spread of fake news and the ongoing allegations resulting from it. These events very likely influenced the outcome of those elections (CNN, 2019). Recently the state of California passed a law which “criminalizes the creation and distribution of video content (as well as still images and audio) that are faked to pass off as genuine footage of politicians” (Winder, 2019). In 2020 we’ll find out whether deepfakes have been restricted successfully.

I hope developers and users of deepfake technology will become aware of its huge threats and will use it in a responsible way. It is also important for society to remain critical of its news sources and to avoid supporting this kind of technology misuse. According to Wired (Knight, 2019), Google has released thousands of deepfake videos to serve as training input for AI that detects other deepfake videos. Another company, called Deeptrace, is using deep learning and AI to detect and monitor deepfake videos (Deeptrace, n.d.).
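To give a rough idea of what such detection systems build on, here is a minimal sketch of a frame-level real/fake classifier in PyTorch. To be clear, this is not Google’s or Deeptrace’s actual pipeline: the tiny architecture and the random stand-in data are my own assumptions, purely to illustrate the basic approach of training a classifier on labeled real and fake video frames.

```python
# Minimal sketch: train a binary classifier on labeled real/fake frames.
# Illustrative only -- not any real product's detection pipeline.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Tiny CNN mapping a 128x128 RGB frame to a single real/fake logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 64x64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 32x32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.AdaptiveAvgPool2d(1),                               # -> 1x1
        )
        self.head = nn.Linear(64, 1)  # logit > 0 means "fake"

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_step(model, optimizer, frames, labels):
    """One optimization step on a batch of frames (labels: 1=fake, 0=real)."""
    optimizer.zero_grad()
    logits = model(frames).squeeze(1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for an actual frame dataset.
model = FrameClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.randn(8, 3, 128, 128)        # batch of 8 stand-in "frames"
labels = torch.randint(0, 2, (8,)).float()  # random real/fake labels
print(train_step(model, opt, frames, labels))
```

In practice the hard part is exactly what Google’s released dataset addresses: collecting enough labeled fake footage for such a classifier to generalize.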

See you in 2020…

References

CNN. (2019). 2016 Presidential Election Investigation Fast Facts. Retrieved from CNN: https://edition.cnn.com/2017/10/12/us/2016-presidential-election-investigation-fast-facts/index.html

Deepnude Twitter. (2019). deepnudeapp Twitter. Retrieved from Twitter: https://twitter.com/deepnudeapp

Deeptrace. (n.d.). About Deeptrace. Retrieved from Deeptrace: https://deeptracelabs.com/about/

Knight, W. (2019). Even the AI Behind Deepfakes Can’t Save Us From Being Duped. Retrieved from Wired: https://www.wired.com/story/ai-deepfakes-cant-save-us-duped/

Rouse, M. (2019). What is deepfake (deep fake AI). Retrieved from TechTarget: https://whatis.techtarget.com/definition/deepfake

Winder, D. (2019). Forget Fake News, Deepfake Videos Are Really All About Non-Consensual Porn. Retrieved from Forbes: https://www.forbes.com/sites/daveywinder/2019/10/08/forget-2020-election-fake-news-deepfake-videos-are-all-about-the-porn/#26a929963f99


7 thoughts on “The Threat of Deepfakes”

  1. Thank you for the interesting article about deepfakes!

    I am interested in the technology behind deepfakes and would love to hear your thoughts. Do you see any other positive applications of GAN technology (Generative Adversarial Networks) besides detecting deepfake videos? And do you think that GANs will ever approach human creativity?

    1. Hello Duncan!

      First of all I would like to thank you for your comment. I could (or maybe should) have elaborated on the technology behind the fast-improving deepfakes, in the form of GAN technology, in my article. So I really appreciate your comment.

      And now about the content of your comment. While writing this article I asked myself what positive applications the technology behind deepfakes could have. I could not get much further than entertainment and cultural purposes, like the creation of movies and photography. When searching the internet, it mostly stays within that scope. More examples are provided on the following website:
      https://machinelearningmastery.com/impressive-applications-of-generative-adversarial-networks/

      I really think creative AI is still a step too far in the near future. Current AI technologies rely heavily on generating output based on their input. I believe it’s quite a challenge for AI to generate something it has never analyzed before as an input or a combination of inputs.

      One day human creativity will be approached, but I believe we will have to be patient for quite a while. Even IBM calls it the ultimate moonshot:
      https://www.ibm.com/watson/advantage-reports/future-of-artificial-intelligence/ai-creativity.html
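      To make the adversarial idea behind GANs a bit more concrete, here is a minimal toy sketch in PyTorch. It is purely my own illustration (nowhere near a real deepfake model): a tiny generator learns to mimic a simple one-dimensional data distribution while a discriminator simultaneously learns to tell generated samples from real ones, which is the same tug-of-war that keeps pushing deepfake quality upward.

```python
# Toy GAN sketch: generator G vs. discriminator D on a 1-D distribution.
# Deliberately tiny -- the point is the adversarial loop, not the model.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0  # "real" samples ~ N(2, 0.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.functional.binary_cross_entropy_with_logits

for step in range(2000):
    # 1) Train D: push real samples toward label 1, generated toward 0.
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train G: try to fool D into labeling generated samples as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward 2.0
```

      Scale the same loop up to images instead of numbers and you have, in essence, the engine behind deepfakes.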

  2. I’ve read before that in order to combat the impending threat of deepfakes, several multinational companies like Facebook and Microsoft have put together a coalition to deal with potential attacks and announced a prize pool of $10 million for the developers who come up with the best detection algorithms. This is in addition to DARPA, the US Department of Defense’s research agency, which has allocated $68 million over the past two years for this purpose. So there will definitely be countermeasures against such a powerful tool as deepfake.

    1. Hello Temirkhan! Thank you for your comment!

      Indeed, a lot of money and research time is going into countering the threat of deepfakes. However, “dark” deepfake technology will not disappear because of this. I believe programmers with bad intentions will even see it as more of a challenge to beat the detection systems. AI learns from past mistakes and will therefore improve its output in the end. At a certain point, I believe, deepfake videos will be almost indistinguishable, even for detection systems. Then (or maybe even now) the focus should shift to monitoring, tracking, and catching the spreaders of deepfake content.

  3. It is actually quite concerning to me that this technology is going mainstream, mainly because, to the vast majority of the population browsing through their social media feed, deepfakes fool both the visual and auditory systems. In a couple of years it is going to get increasingly simple for people to manipulate video content, which makes discerning factual footage so much harder. I think it is inevitable, but I also think we are going to pay the price. I’m quite pessimistic about the power of the technology, considering what fake news articles alone could do to influence people’s opinions. Let’s hope we develop sophisticated enough technology so that consumers can detect deepfakes as well!

    1. Hello Caleb! Thank you for your comment.

      I understand where your pessimistic view of the technology comes from. As I (sort of) replied to Temirkhan, maybe we should shift some of the focus to the sources of deepfakes, rather than solely to the technology and its detection. Maybe we should consider making deepfakes a crime, unless allowed on a permit basis for specific activities (like entertainment/movie creation), strictly monitored.

  4. Anouar, thank you for sharing your insights on this interesting topic! I agree that the adversarial use of Artificial Intelligence (AI), although oftentimes exaggerated by general media, is slowly starting to materialize. Aside from the examples you mentioned in your post, such as the creation of fake material of politicians or celebrities for motives of extortion, entertainment, or political misinformation campaigns, deepfakes have recently also been used against corporate networks in elaborate social engineering campaigns. In the example I am referring to, the voice of a U.K.-based energy firm’s CEO was mimicked to convince an employee into transferring an amount of €220,000. Specifically, the fraudsters used AI-enabled software to imitate the CEO’s accent, voice, and cadence, with which they called an employee of that organization asking for the immediate transfer of the above-mentioned sum to one of the firm’s “suppliers” in Hungary. That supplier account, however, was in fact controlled by the fraudsters and used to divert the money into several accounts across the globe, making the flow very difficult to trace. [1]
    Using [defensive] Artificial Intelligence to identify and thereby combat [adversarial] Artificial Intelligence seems to open up a new, unresolvable cat-and-mouse game between fraudsters and ‘defenders’ or ‘fact-checkers’. I think that technologies like Deeptrace can only be effectively employed against deepfakes when used in combination with other strategic security concepts, such as the zero-trust principle. In this way, content may only be published or rated as credible when its origin can be proven to be a credible source (a toy sketch of this check follows at the end of this comment). I am aware that this is much easier said than done in reality; however, in the context of AI-enabled social engineering campaigns, as well as politically motivated misinformation campaigns, zero-trust policies appear increasingly imperative for the verification of content. In the example I mentioned above, the employee certainly made a mistake by blindly trusting the CEO’s claim without checking the legitimacy of the bank details mentioned by the caller with the actual supplier of the company. This type of threat will certainly open up new trust models between organizations, in which legitimate-seeming claims can decreasingly be taken for granted. Whether this approach also applies to the use of deepfakes in the creation of extortion material and misinformation campaigns is more complicated in my opinion, as it would involve intense cooperation from media outlets in verifying their sources. Indeed, many for-profit media outlets may in fact have a vested interest in not “knowing” the origin or “deepfake” nature of the content they publish, as such content may easily go viral. From this perspective, I think the biggest threat of deepfakes to society is their potential implications for free speech, which is why significant resources should be invested in technologies such as Deeptrace.
    Finally, I highly agree with your conclusion that the way in which AI is used ultimately lies with its developers, and that they must be made aware of the potential ways their software products may be abused by adversaries before publishing them, or developing them in the first place.

    [1] https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
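    To illustrate the zero-trust idea in its simplest possible form, here is a toy sketch of my own (hugely simplified: a shared-secret HMAC in place of real PKI) in which a publisher only accepts content carrying a valid signature from a known source:

```python
# Toy sketch of a zero-trust publishing check: content is only accepted
# when it carries a valid signature from a known, trusted source. The
# shared-secret registry below is entirely made up for illustration.
import hmac
import hashlib

TRUSTED_SOURCES = {"example-agency": b"shared-secret-key"}  # hypothetical registry

def sign(source: str, content: bytes) -> str:
    """What a trusted source would attach to the content it releases."""
    return hmac.new(TRUSTED_SOURCES[source], content, hashlib.sha256).hexdigest()

def accept_for_publication(source: str, content: bytes, signature: str) -> bool:
    """Zero trust: reject unless the claimed source's signature verifies."""
    key = TRUSTED_SOURCES.get(source)
    if key is None:
        return False  # unknown source -> never trusted by default
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

video = b"...raw video bytes..."
good_sig = sign("example-agency", video)
print(accept_for_publication("example-agency", video, good_sig))      # True
print(accept_for_publication("example-agency", video, "forged-sig"))  # False
```

    A real system would need public-key signatures and key distribution rather than shared secrets, but the principle is the same: verify origin before trusting content.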
