Is seeing believing?

14 October 2019



Over the past year, deepfakes have become so good at manipulating images, video and audio that they may soon be indistinguishable from reality. If that happens, the societal implications would be substantial. This is especially worrying given the recent rise of fake news outlets that have successfully influenced political discourse during national elections. Deepfakes could take fake news to the next level by supplying readers with fabricated “proof”, further polarising society along political lines.

Keeping this in mind, it is important that we find ways to counter deepfakes and expose manipulated images, video and audio, so that we can keep believing what we see online. This blog post explores three ideas for how deepfakes could be countered and exposed.

  1. We could trust trained individuals to make value judgements about potential deepfake content. Sarah T. Roberts, an information scholar at UCLA, suggests that such people could be trained to spot signs of manipulation and to take down questionable content from online platforms, helping to make the internet safer and more trustworthy.
  2. Stricter legal penalties for image, video and audio forgery could deter some deepfake artists, potentially reducing the number of deepfakes created and distributed online.
  3. Developing machine learning algorithms to analyse uploaded media and platform content could help filter out suspicious material. According to John Villasenor, an engineering professor, deepfake videos may look indistinguishable from real videos to the human eye, but they contain very slight errors that only machines can pick up on. If such algorithms could be made reliable enough to scale across online platforms, they would contribute to a more trustworthy internet (a rough sketch of what such a detector might look like follows this list).
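
To make the third idea concrete, here is a minimal sketch of what a platform-side detector might look like. It is an illustration only, not the specific approach Villasenor describes: it assumes a generic pretrained CNN (ResNet-18 via torchvision) fine-tuned as a binary real/fake frame classifier, and every model choice, name and parameter below is a hypothetical example.

```python
# Illustrative sketch of a frame-level deepfake detector.
# Assumption: a ResNet-18 backbone fine-tuned on labelled real/fake
# frames; the architecture and thresholds are examples, not the
# method from the cited articles.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Pretrained backbone; swap the final layer for a 2-class head
# (0 = real, 1 = manipulated).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

# Standard ImageNet-style preprocessing for each video frame.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def manipulation_score(frames):
    """Return the mean 'manipulated' probability over a list of PIL frames."""
    model.eval()
    batch = torch.stack([preprocess(f) for f in frames])
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[:, 1]
    return probs.mean().item()

# A platform could flag uploads whose score exceeds some threshold,
# e.g. manipulation_score(sampled_frames) > 0.8, for human review.
```

In practice, a pipeline like this would sample frames from each uploaded video, likely crop detected faces first, and aggregate per-frame scores before flagging content, since the subtle artefacts machines can detect tend to concentrate around faces.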

Sources:

https://www.cnbc.com/2019/10/14/what-is-deepfake-and-how-it-might-be-dangerous.html

https://www.theverge.com/2019/6/10/18659432/deepfake-ai-fakes-tech-edit-video-by-typing-new-words

https://www.technologyreview.com/s/614343/the-worlds-top-deepfake-artist-wow-this-is-developing-more-rapidly-than-i-thought/


2 thoughts on “Is seeing believing?”

  1. Ilari, thanks for your post on this topic.

    I totally agree that deepfakes pose a tremendous threat to our society, especially to our democratic principles. Therefore, I found it interesting to read the three ideas on how deepfakes can be countered. In my opinion, the most effective one would be to develop machine learning algorithms that analyse uploaded content for inconsistencies in order to spot deepfakes. The idea of training individuals to make judgements about possible deepfake content is, in my opinion, not feasible. First, as you mentioned yourself, it is very hard to spot deepfakes, and I imagine it is only getting harder as these algorithms evolve. And as it becomes easier to develop deepfakes, there will be an increasing number of them on the internet, with which manual review probably cannot cope anymore. While I like and support the idea of stricter punishments for image, audio and video forgery, I believe it will only be effective to a certain extent. I imagine that people who are capable of creating deepfakes, and who want to influence groups of people with them, will still find ways to circulate them without revealing themselves. In any case, I am very curious about the future of deepfakes and whether we will encounter them in any major political scandals.

  2. Interesting topic! I’m curious specifically about point 1, where you state that people can be trained to spot signs of manipulation. But isn’t that a bit contradictory to what you say earlier about deepfakes and how good they have become at accurately manipulating audio and video? How are people going to be trained to spot signs of manipulation if it is so accurately ‘fake’? I get the impression that not even certain technologies could spot these forms of manipulation, let alone a newly trained human being. Curious to hear what you think though.
