Artificial Intelligence presents many opportunities. However, AI tools can also have disturbing consequences. In this blog, I will elaborate on this using the example of deepfakes. A deepfake uses deep learning to create an image or a video of fake occurrences (Sample, 2020). The technology is continuously developing and getting better and better. In the early stages of its development, it was clear that the videos were fake. However, as the technology improves, it will soon (or already?) be impossible to recognize whether a video is real or fake.
These deepfake videos create many opportunities. For example, a short deepfake video of Anne Frank was created to foster a deeper understanding of the Holocaust (Ribbens, 2021). However, while that use is educational, Sample (2020) explains that most deepfake videos are pornographic, with celebrities as the main characters.
Another example of a deepfake is the following video of “Barack Obama”: https://www.youtube.com/watch?v=cQ54GDm1eL0&ab_channel=BuzzFeedVideo .
As you can imagine, while this video is “funny”, it shows that “Obama” can be made to “say” disturbing things. And since this tool is accessible to everyone, the question arises: “How can we ensure that AI tools are used properly?”
In my opinion, the most important step is to create awareness and educate people on how to distinguish a real video from a fake one. Making the tools less accessible would be another solution, but this would be very hard to implement. Another option would be to have every video checked by a team; however, the whole point of creating a deepfake in a few minutes would then be lost. A further possible solution would be to raise prices. While this would make the tools less attractive to some, I think it would mostly discourage the people who would use them for positive purposes.
In short, while funny and creative deepfake videos are taking over the internet, the deepfake is an AI-driven tool that can also be used for disturbing purposes. With fake pornographic videos being made and influential people appearing in deepfakes, it is clear that not everyone uses the tool for educational or entertainment goals. But then, isn’t there a downside to all AI tools?
Bibliography
Ribbens, K. (2021, April 8). Is DeepFake the Future of Holocaust Memory? – The British and Irish Association for Holocaust Studies. https://biahs.co.uk/2021/04/08/is-deepfake-the-future-of-holocaust-memory/
Sample, I. (2020, January 13). What are deepfakes – and how can you spot them? The Guardian. https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them
Hi Joyce, I really like your post, and it relates to the topic I wrote about as well. I agree that raising awareness is very important in tackling the escalating threat of deepfake technology, but I do question whether it is the ‘most’ important. Personally, I think we need a comprehensive approach, including rules, regulations, and responsible use of AI. It’s important for policymakers, tech developers, and society to work together and help people understand digital technology better. As you acknowledged, limiting access and increasing prices are proposed solutions, but each poses challenges. To add to this, I believe that restricting access might lead to a chase between regulators and users, while higher prices could create a market only for the rich, potentially making inequality worse. All in all, thanks for your informative and fun post!
Hi Joyce, thank you for this insightful blog. I like how you emphasize that while AI-driven deepfake videos can be entertaining and creative, they also pose risks. It reminds us to consider the ethical implications of AI tools, which can simultaneously offer opportunities and pitfalls in our rapidly changing digital landscape. Don’t you think? Thanks again and see you in class.
Emma