Is Seeing still Believing? The Danger of Deepfakes

29 September 2020


Fake news has been a “hot topic” for the last few years, although the phenomenon itself is much older: Lee (2019) states that it has been around for some 125 years, but has recently taken on a new meaning. The rise of the internet as a source of information, and the fact that literally everybody can post content online and reach an audience, is responsible for the enormous increase in fake news, as well as for its impact, since it can spread very rapidly through social media channels like Facebook and Twitter. Most people do not read very carefully due to the information overload they receive daily; skim reading seems to be the new normal (Wolf, 2018), and quite a lot of people still automatically tend to believe most of what is written with a certain air of authority. Fortunately, some people are aware of the dangers of spreading disinformation and actively check multiple sources before they believe what they see.

If, for example, the president said something controversial according to a fake news source, one could check videos of the president’s speech to see and hear whether he actually said it, and if so, in which context. It takes a little effort, but such things can be checked relatively easily. This changes with the advent of deepfakes: fake videos created with a particular subset of artificial intelligence, deep learning, hence the name (Roughol, 2019). The AI uses real images of humans to learn to create fake images of humans, more realistically than any other technique you’ve seen before (Scott, 2019).
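To give a feel for how that learning works: deepfake systems typically pit two networks against each other, a generator that produces fakes and a discriminator that tries to spot them, each improving by competing with the other. The sketch below is a deliberately toy version of that adversarial loop. Instead of images, the “real data” is just numbers drawn from a bell curve around 4, and the generator and discriminator are single-parameter models I made up for illustration; real deepfakes use large convolutional networks, but the training idea is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator: G(z) = w_g * z + b_g, fed with random noise z
w_g, b_g = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w_d * x + b_d), outputs "probability real"
w_d, b_d = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # --- train the discriminator to tell real from fake ---
    real = rng.normal(4.0, 1.0, batch)          # "real data": N(4, 1)
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    # gradient of -log D(real) - log(1 - D(fake)) w.r.t. w_d, b_d
    g_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    g_b = np.mean(-(1 - d_real) + d_fake)
    w_d -= lr * g_w
    b_d -= lr * g_b

    # --- train the generator to fool the discriminator ---
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g
    d_fake = sigmoid(w_d * fake + b_d)
    # gradient of -log D(fake) w.r.t. w_g, b_g (non-saturating GAN loss)
    grad_fake = -(1 - d_fake) * w_d
    w_g -= lr * np.mean(grad_fake * z)
    b_g -= lr * np.mean(grad_fake)

samples = w_g * rng.normal(0.0, 1.0, 1000) + b_g
print(f"generated mean: {samples.mean():.2f} (real data is centred on 4.0)")
```

After training, the generator’s output drifts toward the real distribution, not because anyone told it what “real” looks like, but because fooling the discriminator is the only way to lower its loss. Swap the scalars for face images and the linear models for deep networks and you have the essence of a deepfake generator.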

See if anything looks off to you in this short scene of The Matrix:

Scott (2019) shows that the current state of the technique still reveals to the trained eye that something is not completely genuine, but this might change with technological advancements. The same can be done with audio: nowadays, only a very short recording is needed to replicate a voice with bewildering accuracy. According to Ongweso (2019), thieves used synthetic audio imitating the voice of a CEO to make his subordinate transfer $243,000 to a Hungarian bank account. Imagine the damage this can do, in many ways (think of the upcoming elections), especially in combination with video. Low awareness of techniques like these is dangerous, but even with high awareness, the lack of a reliable way to detect deepfakes remains deeply problematic: at the moment, the capacity to generate deepfakes is advancing much faster than the ability to detect them (Galston, 2020).

I could go on about this, but I’ll end this blog with a quote from Scott (2019), who is very sceptical about the technological advancements we’ve made in recent years:

“Political operatives will use behavioural and persuasion algorithms that the average user is oblivious to. Add to this a layer of Deepfake videos and Deepfake audio that could collapse our ability to tell the difference between fake and real, and you get the perfect storm of psychological, cultural, and political chaos.”

Do you think we are close to that chaos?

References:
Galston, W. A. (2020, May 6). Is seeing still believing? The deepfake challenge to truth in politics. Brookings. https://www.brookings.edu/research/is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics/
Lee, T. (2019). The global rise of “fake news” and the threat to democratic elections in the USA. Public Administration and Policy, 22(1), 15–24. https://doi.org/10.1108/pap-04-2019-0008
Ongweso, E. (2019, September 5). Thieves Used Audio Deepfake of a CEO to Steal $243,000. VICE. https://www.vice.com/en_us/article/d3a7qa/thieves-used-audio-deep-fake-of-a-ceo-to-steal-dollar243000
Roughol, I. (2019, July 3). The real danger of deepfakes. LinkedIn. https://www.linkedin.com/pulse/real-danger-deepfakes-isabelle-roughol
Scott, G. (2019, September 8). DeepFake and the Future of Reality. Gray Scott. https://www.grayscott.com/futuristic-now//deepfake-and-the-future-of-reality
Wolf, M. (2018, August 25). Skim reading is the new normal. The effect on society is profound. The Guardian. https://www.theguardian.com/commentisfree/2018/aug/25/skim-reading-new-normal-maryanne-wolf


2 thoughts on “Is Seeing still Believing? The Danger of Deepfakes”

  1. What an interesting article, also with the addition of a video! Indeed, fake news travels faster because it capitalizes on fancier words that trigger emotions and make people act faster and sometimes impulsively. The question for me is: how do we make people aware of such problems, when the elderly generations can barely use a mobile phone or a laptop, and even when they do, they are certainly not critical about what they see? As I see it, it is important to get information from multiple sources; a person cannot rely on just one website, article or page. The question is how you stop people from spreading such content, or how you differentiate the fake from the true. I think it sometimes comes down to individual action, such as flagging content as inappropriate. Human checks are also part of the solution until trustworthy technologies can be developed. It’s definitely something to worry about, and you are right: chaos could be the scenario if people do not act and respond to such threats.

    1. Thanks for responding!

      Good to hear that you enjoyed the article! And yes, it is highly needed that everybody at least be informed about the possibility of being fooled on this level. Maybe sharing the blog could make a nice start. 😉 But on a serious note: taking action yourself and speaking up when you come across something like this is indeed very important, because that is something we can do ourselves. Apart from that, people are working hard on detection algorithms that can recognize deepfakes (Vincent, 2019). The problem is that far more people are working on the deepfake technique itself than on solutions against its misuse; Galston (2020) even mentions that the ratio is roughly one hundred to one. Some people are trying to change this: Facebook, for example, recently spent $10 million on the Deepfake Detection Challenge, where participants developed new technology to detect deepfake videos (Uthappa, 2020). But I question whether detection technology will ever outpace the technology to fake it. Think about photoshopping… you can’t tell. Some are thinking about adding something to the metadata of a video that proves it is real, in combination with blockchain, so that it really couldn’t be changed. I’m not sure exactly how it would work, but such a solution could have potential. Time will tell what deepfakes will bring us, and I’m definitely keeping a close eye on things regarding the upcoming elections in America.
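      To make the metadata idea a bit more concrete, here is a minimal sketch of the fingerprinting half of such a scheme. The assumption (mine, not a real product) is that a camera or trusted app computes a cryptographic hash of the raw footage at capture time and publishes it somewhere append-only, such as a blockchain; later, anyone can re-hash a copy and compare. The blockchain part is left out; this only shows why even a one-word edit is detectable.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 digest of the raw video bytes."""
    return hashlib.sha256(data).hexdigest()

def is_unaltered(data: bytes, published_digest: str) -> bool:
    """Check a copy against the digest published at capture time."""
    return fingerprint(data) == published_digest

# At capture time: hash the footage and publish the digest (placeholder bytes).
original = b"...raw video bytes straight from the camera..."
published = fingerprint(original)

# Later: verify an untouched copy and a tampered one.
tampered = original.replace(b"camera", b"deepfake tool")
print(is_unaltered(original, published))   # True: the copy matches
print(is_unaltered(tampered, published))   # False: any edit changes the hash
```

The hard problems a real system would still face, key management, trusting the capture device, and surviving legitimate re-encoding, are exactly why this is an open research area rather than a solved one.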

      Galston, W. A. (2020, May 6). Is seeing still believing? The deepfake challenge to truth in politics. Brookings. https://www.brookings.edu/research/is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics/

      Uthappa, A. (2020, May 18). Deepfake videos are everywhere. So how do we know what’s real? ESHA IT. https://eshacorpit.com/deep-fake-videos/

      Vincent, J. (2019, June 27). Deepfake detection algorithms will never be enough. The Verge. https://www.theverge.com/2019/6/27/18715235/deepfake-detection-ai-algorithms-accuracy-will-they-ever-work
