Artificial Intelligence might be one of the most-used buzzwords in today’s technology arena. Its promises range from big missions, such as helping us fight climate change or becoming our digital doctor, to smaller conveniences, such as the customer support chatbot sorting out your insurance. You hear it everywhere: AI will make the world a better place.
However, amid all these amazing ideas, ideals and uses of AI, some dark downsides are emerging. Already in 2018, BuzzFeed showed the world, in a humorous way, how AI-powered software produced a video in which former president Obama appears to give a speech (Silverman, 2018). It might seem funny, but it actually shows how dangerous AI can be: the speech was fake, and the software had created what is called a ‘deepfake’.
A deepfake is defined as “a video of a person in which their appearance has been digitally altered so that they look like somebody else” (Oxford Advanced Learner’s Dictionary, n.d.). The example described above might have been entertaining to some extent, but it shows the alarming effect AI deepfakes can have on our media and news. And it does not stop there; while fake news affects society in a broader sense, other applications can affect you and the people around you directly. One such application was found by deepfake expert Henry Ajder, who discovered an app that uses AI to insert a person’s face into an adult movie (Hao, 2021). The threat this poses, mainly to women in our society, is tremendous. Remember that speech by Obama? Did you notice it was fake? Now imagine hearing that a video is circulating online in which you appear to participate in an adult movie, and it is hard to tell whether it is fake or not. It would devastate you!
What makes this problem even more painful is that there are hardly any laws in place to take action. For example, revenge porn is illegal only in the UK; no other country has laws forbidding the creation of fake non-consensual sexual content (Mania, 2020). This kind of software is yet another example of technology advancing while control and legislation lag behind.
The message is clear: if governments do not step up their game and start creating laws and mechanisms to maintain at least some control over these technologies, the negative impact on society and on the lives of individual people could be horrendous.
Sources
Hao, K. (2021, September 13). A horrifying new AI app swaps women into porn videos with a click. MIT Technology Review. https://www.technologyreview.com/2021/09/13/1035449/ai-deepfake-app-face-swaps-women-into-porn/
Mania, K. (2020). The legal implications and remedies concerning revenge porn and fake porn: A common law perspective. Sexuality & Culture, 24, 2079–2097. https://doi.org/10.1007/s12119-020-09738-0
Oxford Advanced Learner’s Dictionary. (n.d.). Deepfake. https://www.oxfordlearnersdictionaries.com/definition/english/deepfake
Silverman, C. (2018, April 17). How to spot a deepfake like the Barack Obama–Jordan Peele video. BuzzFeed. https://www.buzzfeed.com/craigsilverman/obama-jordan-peele-deepfake-video-debunk-buzzfeed
Very interesting blog post!
This is an urgent problem that, in my eyes, needs to get more attention. You mentioned the deepfake of Obama and gave some extra attention to this topic. I fully agree that governments need to be proactive and start creating regulations to prevent the downsides of Artificial Intelligence. In my opinion, governments have to work together to get this problem under control: the internet does not stop at the border, and neither does the abuse of AI. For example, the European Union could create a screening service that scans the internet for fake information. In addition, the people behind the spread of misinformation need to be traced by the authorities. On the one hand, they need to be punished; on the other hand, this can deter others who want to do the same thing.
I know it is hard to detect internet abusers, but I think we need to invest more money in systems that detect them faster and more efficiently.