The Terrifying Rise of Deep-Fake Content

17 October 2018

Earlier this year the actress Drew Barrymore had to deal with some bizarre fabrications about her life published in Egyptair’s in-flight magazine “HORUS”. Celebrities often have to deal with stories based on half-truths or even outright lies, but the lengths to which the interviewer went here are pretty scary. They photoshopped an original photo of Barrymore holding the magazine “Nisf Al-Donia”, swapped that magazine for their own, and unashamedly published the result in Egyptair’s magazine. The only reason this was discovered is that the content, Barrymore’s forged answers, did not reflect her life at all. But what if the interviewer had been a bit more clever? Then this magazine article would have gone unnoticed and been handed to passengers without any remarks.

This brings me to the topic I want to address in this blog post, something that has worried me for quite some time now: deep-fake content. Deep-fake learning is an AI-based human image synthesis technique used to combine existing and fabricated images, audio, or video into source material, creating fake content that looks like reality. Deep-fakes are mostly used to create fake celebrity or revenge pornography [1]. Of course, there are also less harmful use cases of deep-fake content, for instance comedy sketches known as derpfakes.
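To give a feel for how this synthesis works: many deep-fake tools train one shared encoder together with a separate decoder per face, then swap decoders at inference time, so face A's pose and expression get rendered with face B's appearance. Below is a minimal, untrained numpy sketch of just that architecture; all layer sizes, weights, and function names are illustrative placeholders, not a real model or any specific tool's API.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM = 64 * 64   # flattened grayscale face crop (illustrative size)
LATENT_DIM = 128    # shared latent representation

# One shared encoder: learns features common to both faces.
W_enc = rng.normal(scale=0.01, size=(LATENT_DIM, IMG_DIM))

# One decoder per identity: learns to render that specific face.
W_dec_a = rng.normal(scale=0.01, size=(IMG_DIM, LATENT_DIM))
W_dec_b = rng.normal(scale=0.01, size=(IMG_DIM, LATENT_DIM))

def encode(face):
    """Map a face image to the shared latent space."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Render a latent vector back into an image with one decoder."""
    return W_dec @ latent

def swap_face(face_a):
    """Encode face A, but render it with face B's decoder.

    After training, this yields B's appearance with A's pose and
    expression: the deep-fake effect."""
    return decode(encode(face_a), W_dec_b)

fake = swap_face(rng.normal(size=IMG_DIM))
print(fake.shape)  # (4096,)
```

In a real system the weights are learned by reconstructing thousands of face crops per identity; the swap step itself is exactly this simple.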

From a technological point of view, deep-fake techniques are a marvel of engineering, pushing the boundaries of what can be done with graphics processors and algorithms. Sadly, however, deep-fake technologies are mostly used for pornography or, even worse, revenge pornography [2]. The latter is really important, because celebrities are often well protected thanks to their popularity, but regular people like you and me will find themselves in a much trickier situation. In the UK, creating harmful deep-fake material is considered a crime, but in other EU member states this is not the case. Recently the US Department of Defense developed tools designed to catch deep-fakes [3]. But governments are still hesitant to make deep-fake abuse a specific type of crime. The public is not sufficiently aware of the rising technological possibilities of deep-fakes, and so governments do not make it a priority either. With this blog post I hope to give you some insight into this topic, make you aware of the dangers, and convince you that this should explicitly be made punishable.
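Detection tools of that kind reportedly exploit simple physiological cues, such as the fact that early deep-fakes rarely blinked naturally. A toy sketch of that idea is below; the eye-aspect-ratio (EAR) values, thresholds, and function names are made-up illustrations, not the actual forensic method.

```python
# Toy deep-fake "detector" based on blink frequency.
# Real forensic tools are far more sophisticated; the EAR values
# and thresholds here are illustrative assumptions, not
# calibrated numbers.

EAR_BLINK_THRESHOLD = 0.2   # eye considered closed below this
MIN_BLINKS_PER_MINUTE = 5   # humans normally blink far more often

def count_blinks(ear_per_frame):
    """Count closed-eye episodes in a sequence of per-frame EAR values."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < EAR_BLINK_THRESHOLD and not closed:
            blinks += 1
            closed = True
        elif ear >= EAR_BLINK_THRESHOLD:
            closed = False
    return blinks

def looks_fake(ear_per_frame, fps=30):
    """Flag footage whose blink rate is implausibly low."""
    minutes = len(ear_per_frame) / (fps * 60)
    return count_blinks(ear_per_frame) / minutes < MIN_BLINKS_PER_MINUTE

# A full minute of video where the eyes never close is suspicious.
print(looks_fake([0.3] * 1800))  # True

# The same minute with 10 brief blinks looks normal.
normal = [0.3] * 1800
for i in range(10):
    normal[i * 180] = 0.1
print(looks_fake(normal))  # False
```

The arms race is real, though: once a cue like this becomes known, the next generation of fakes is trained to reproduce it.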

For the women reading this post: please be careful with what you post on social media and with how accessible your content is. Numerous studies have shown that women are more often victims of deep-fake content than men.

[1] “What Are Deepfakes & Why the Future of Porn is Terrifying”. Highsnobiety, 2018-02-20.
[2] https://tweakers.net/nieuws/134449/vervangen-van-gezicht-in-pornovideos-met-ai-neemt-grote-vlucht-door-tool.html
[3] https://www.technologyreview.com/s/611726/the-defense-department-has-produced-the-first-tools-for-catching-deepfakes/

1 thought on “The Terrifying Rise of Deep-Fake Content”

  1. Hi Ivor, really interesting post! Deep-fake content was new to me, but it makes sense that AI could be abused in this way. Accordingly, I completely agree that deep-fake content should be punishable and should be part of cyber-security programs. As the world becomes more digital, more privacy concerns are raised, and this should definitely be one of them.
