Should ‘deepfake’ AI be free?

17 October 2023


Recently the United States' cyber defence agencies posted an alert regarding the threat of synthetic media, such as deepfakes. They warned of the exponential increase in deepfake videos and the technology behind them (NSA, FBI, and CISA release Cybersecurity Information Sheet on Deepfake threats | CISA, 2023). The BBC also ran an article on the dangers of deepfakes, discussing scam videos made with deepfake technology featuring the famous YouTuber MrBeast. These videos targeted fans, telling them they could win an iPhone by providing private information (Gerken, 2023).

Reading this made me curious about deepfakes, and I wanted to find out how difficult it is to make such videos using this technology.

Deepfake

However, it is first important to understand what a deepfake is. According to MIT, the term deepfake refers to synthetic media in which the person in a video or image is replaced with another person’s likeness (Deepfakes, explained | MIT Sloan, 2020).

Basically, deepfake technology is an artificial intelligence method that modifies or creates videos, audio, and images of individuals. Fundamentally, it employs machine learning algorithms to analyze, replicate, and mimic a person’s voice or facial expressions found in source media such as videos or images (Clark, 2023). In one well-known example, Mark Zuckerberg could be seen in a video talking about how great it is to have information about millions of people:

‘Imagine this…’ (2019) This deepfake moving image work is from the ‘Big Dada’ series, part of the ‘Spectre’ project. Where big data, AI… | Instagram
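To make the idea above concrete, here is a toy NumPy sketch of the shared-encoder/per-person-decoder architecture popularized by early face-swap tools. This is not D-ID's actual implementation, and the weights here are random rather than trained; it only illustrates the mechanism: one encoder learns a person-independent representation of an expression, each person gets their own decoder, and swapping decoders at inference time produces the fake.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT = 8    # size of the person-independent "expression" code
PIXELS = 64   # e.g. a flattened 8x8 grayscale face crop (illustrative)

# In a real system these matrices are learned; here they are random.
encoder = rng.normal(size=(PIXELS, LATENT))    # shared across identities
decoder_a = rng.normal(size=(LATENT, PIXELS))  # reconstructs person A's face
decoder_b = rng.normal(size=(LATENT, PIXELS))  # reconstructs person B's face

def encode(face):
    """Map a face crop to a person-independent latent code."""
    return face @ encoder

def swap(face_of_a):
    """Encode A's expression, then decode it with B's decoder:
    B's identity 'wearing' A's expression -- the deepfake trick."""
    return encode(face_of_a) @ decoder_b

fake = swap(rng.normal(size=PIXELS))
print(fake.shape)  # a face-sized output, (64,)
```

Training drives `decoder_a(encode(face_of_a))` to reconstruct A and `decoder_b(encode(face_of_b))` to reconstruct B; because the encoder is shared, the latent code ends up identity-neutral, which is what makes the swap work.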

Own experience

So, I tried D-ID, a website where you can make videos using free deepfake technology. If you pay for the premium version, you can make longer videos and use better technology. However, for this test, and because I was looking for easy solutions, I wanted to use the free version and see how accurate it is.

First, I typed my own name into Google to see which pictures of me it could find on the web. It showed me an old Twitter (X) photo and my current LinkedIn profile picture:

To blend in as a ‘scammer’, I took a screenshot of my own LinkedIn profile and used it as input for D-ID. This is the result:

How I look scares me, and it should scare you too. The most frightening part is that basically anyone could do this with the free information Google provides. Moreover, if this is what ‘free’ looks like, how would an advanced deepfake of my picture look?

Luckily, the technology is not ‘that advanced yet’. Having watched the examples of myself, Mark Zuckerberg, and MrBeast multiple times, it is actually quite easy for me to conclude that they are, of course, not real. However, to some target groups (e.g., minors) they could look realistic, which brings up the ethical side of this technology.

Also, the United States' cyber defence agencies are already preparing to warn and help organisations ahead of this kind of technology. The National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) collaborated on a document regarding the threats deepfakes pose to organisations; you can read it here if you would like to:

https://www.cisa.gov/news-events/alerts/2023/09/12/nsa-fbi-and-cisa-release-cybersecurity-information-sheet-deepfake-threats

Of course, there will be examples of this technology helping the society we live in; however, there are some serious downsides that we need to address as well.

Should this technology be free for everyone to use, like D-ID? How far can we go with this type of technology?

Sources used:

Gerken, B. T. (2023, 4 October). MrBeast and BBC stars used in deepfake scam videos. BBC News. https://www.bbc.com/news/technology-66993651

NSA, FBI, and CISA release Cybersecurity Information Sheet on Deepfake threats | CISA. (2023, 12 September). Cybersecurity and Infrastructure Security Agency CISA. https://www.cisa.gov/news-events/alerts/2023/09/12/nsa-fbi-and-cisa-release-cybersecurity-information-sheet-deepfake-threats

Deepfakes, explained | MIT Sloan. (2020, 21 July). MIT Sloan. https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained

Clark, M. (2023). Understanding deepfake technology: how it works and concerns arising from its implementation. pctechmag.com. https://pctechmag.com/2023/05/understanding-deepfake-technology/


2 thoughts on “Should ‘deepfake’ AI be free?”

  1. Interesting blog! I do see the dangers of deepfakes and the potential to use this technology to commit fraud or spread fake news, for example. So then, to answer your question, I started looking at positive uses of deepfakes. I couldn’t really figure it out myself, so I did some desktop research. Examples included using deepfake technology for voiceovers, educating people in a more interactive way (such as by bringing historical figures to life), and protecting someone’s privacy by having someone else speak the message.
    In all these examples, turning a random LinkedIn photo into a deepfake video is completely unnecessary. I only see downsides of the use of deepfakes in this case. So, I do think that this technology should not be free for everyone. In fact, I think we have to look at the possibilities of establishing rules for the use of this technique in order to prevent fraud, fake news, and other misbehavior.

  2. Very interesting and topical read! It is fascinating how, even with the most primitive free tools, the results eerily resemble real life. So indeed, with more advanced tools and techniques, deepfakes become an even more concerning issue, capable of posing serious security risks and deeply manipulating public perception, especially amongst more impressionable crowds, as you mentioned. It makes it hard to believe anything is authentic, really. But then again, it is probably impossible to ban deepfakes – it is essentially the same technology that Snapchat’s filters use. The landscape certainly could be more regulated to prevent identity theft and crowd manipulation, though.

    It would be interesting to explore how development in one area almost simultaneously creates a certain technological arms race. Companies and organisations now are developing AI tools to detect and combat deepfakes – one side pushes the boundaries while the other strives to safeguard the information integrity.
