The danger of technology development – Deepfakes

11 September 2019

Nowadays, technology trends are everywhere. While new and innovative technologies can make life easier and more convenient, the darker sides of technological development can also harm people in serious ways. One example is the rise of deepfake technology. This AI-based technology is used to manipulate reality, producing images and videos of real people who appear to say or do things they never said or did. In recent years, many famous people such as Barack Obama, Mark Zuckerberg and Kit Harington have appeared in deepfake videos. Advances in machine learning techniques help editors make deepfake content ever more realistic, and therefore harder to distinguish from reality. Several professors and the Dutch Public Prosecution Service are increasingly concerned that the development of deepfake technology may escalate to dangerous levels. But why is it so dangerous, for individuals and perhaps for society as a whole?
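To make the face-swap idea behind many of these videos a little more concrete, here is a minimal, illustrative sketch (in PyTorch) of the shared-encoder/dual-decoder autoencoder that classic face-swap tools are built around. All layer sizes, names and training details here are assumptions chosen for readability, not the design of any particular deepfake tool.

```python
# Illustrative sketch only: shared encoder + one decoder per identity.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector shared by both identities."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent vector; one decoder is trained per person."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 8, 8)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training idea: reconstruct faces of person A with decoder_a and of person B with decoder_b.
face_a = torch.rand(1, 3, 64, 64)  # placeholder standing in for a real face crop of person A
loss = nn.functional.mse_loss(decoder_a(encoder(face_a)), face_a)

# The "swap": encode a face of A but decode it with B's decoder,
# yielding B's appearance with A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(face_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```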

First, the advantages of deepfake technology should not be neglected. For example, it can be used for educational purposes, presenting information to students in more innovative ways. Other benefits are more psychological: deepfake technology may help people with certain disabilities to experience pornography or video games in a richer and more autonomous way. However, the enormous risks that deepfake technology brings often outweigh these possible benefits.

Above all, manipulated photos and videos of famous or influential people can be used to spread misinformation or to damage their reputations. This can have an impact not only in the business world (where CEOs may be depicted in a negative light) but also in politics and society at large (where influential politicians may appear to say or do things that undermine democracy). On top of the advances in AI and machine learning, the tools and the edited videos themselves are becoming easier to access and distribute. Most worrying of all, the creation of deepfake content cannot be prevented, because the underlying technology and access to it keep evolving. There is no clear solution to this problem yet, but researchers propose building detection systems that can distinguish deepfake videos from real ones, which will be a tough task. A rough impression of what such a detector could look like is sketched below.
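As an illustration only, here is a minimal sketch of the frame-level detection idea: fine-tune an off-the-shelf image classifier to label face crops as real or fake, and average its scores over a video. The backbone, input size and placeholder data are assumptions made for this sketch; the detectors proposed in the research literature are far more elaborate.

```python
# Illustrative sketch only: frame-level real/fake classification with an averaged video score.
import torch
import torch.nn as nn
from torchvision import models

# Pretrained ResNet-18 backbone with a 2-class head (real vs. fake); torchvision >= 0.13 weights API.
detector = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
detector.fc = nn.Linear(detector.fc.in_features, 2)
detector.eval()

def score_video(frames: torch.Tensor) -> float:
    """frames: (N, 3, 224, 224) preprocessed face crops sampled from one video.
    Returns the mean probability that the frames are fake (index 1 of the head)."""
    with torch.no_grad():
        logits = detector(frames)
        fake_prob = torch.softmax(logits, dim=1)[:, 1]
    return fake_prob.mean().item()

# Placeholder tensor standing in for real preprocessed face crops.
frames = torch.rand(8, 3, 224, 224)
print("estimated fake probability:", score_video(frames))
```

In practice the head would of course have to be trained on labelled real and fake face crops before its scores mean anything; the sketch only shows the overall shape of such a system.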

Sources:
Chesney, R. and Citron, D. K. (2018). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. 107 California Law Review (2019, forthcoming); U of Texas Law, Public Law Research Paper No. 692; U of Maryland Legal Studies Research Paper No. 2018-21. Available at SSRN: https://ssrn.com/abstract=3213954 or http://dx.doi.org/10.2139/ssrn.3213954

Eadicicco, L. (2019). There’s a terrifying trend on the internet that could be used to ruin your reputation, and no one knows how to stop it. [online] Business Insider Nederland. Available at: https://www.businessinsider.nl/dangerous-deepfake-technology-spreading-cannot-be-stopped-2019-7?international=true&r=US [Accessed 11 Sep. 2019].

Nu.nl (2019). Openbaar Ministerie uit zorgen over mogelijke afpersing via deepfakes [Public Prosecution Service expresses concerns about possible extortion via deepfakes]. [online] Available at: https://www.nu.nl/tech/5989399/openbaar-ministerie-uit-zorgen-over-mogelijke-afpersing-via-deepfakes.html [Accessed 11 Sep. 2019].
