Will AI-Powered Deepfakes be the Future of Education and Content Creation?

23 October 2023


Within artificial intelligence, one noteworthy area of research centers on ethical and moral considerations across domains, and a prominent example is the concept of “deepfakes.” Deepfakes have opened up a new dimension within artificial intelligence: they can create metahumans or AI avatars capable of mimicking human actions, speech, and gestures.

But what if we harnessed deepfake technology to instantly enhance common educational practices, such as creating presentations? What would that look like? I recently had the opportunity to explore a generative AI web-based application called “Deep Brain AI,” which expands the horizons of AI capabilities, particularly in the realm of content creation. What does this mean in practical terms? Users can develop PowerPoint presentations, just as they always have, to convey information to an audience. The intriguing twist is that full-body animated AI avatars or metahumans can replace the human speaker. Consequently, the presenter doesn’t need to speak at all, as the AI avatar or metahuman handles that task.

The web-based application allows you to create templates, insert text boxes, and upload videos and audio, just like a standard PowerPoint application. The real innovation emerges when you create an AI avatar, male or female, able to speak in various languages and accents from different countries; for instance, you can choose between U.S. English, Indian English, Saudi Arabian Arabic, Taiwanese Chinese, and the German spoken in Germany. The AI avatar articulates the content from a text script, effectively enabling text-to-speech input.

The application offers a range of features, including control over scene speed and the ability to insert additional pauses. Even more fascinating is the incorporation of advanced generative AI technologies, such as ChatGPT, into the application; I found it particularly intriguing that the platform recognizes the utility of ChatGPT and integrates it seamlessly.
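
For readers who want to tinker with the underlying idea outside the web application, the narration side of such a tool essentially boils down to scripted text-to-speech with control over speaking rate and pauses. The sketch below is a minimal illustration using the open-source pyttsx3 library; the slide scripts, rate value, and pause handling are my own assumptions and say nothing about how Deep Brain AI is actually implemented.

```python
# Minimal sketch: narrating slide scripts with offline text-to-speech.
# Assumption: pyttsx3 is installed (pip install pyttsx3). This mimics only the
# narration aspect of an avatar tool, not the animated avatar itself.
import time
import pyttsx3

slides = [
    {"script": "Welcome to today's presentation on generative AI.", "pause": 1.0},
    {"script": "First, let's look at how AI avatars can narrate slides.", "pause": 1.5},
]

engine = pyttsx3.init()
engine.setProperty("rate", 160)  # rough analogue of the app's "scene speed" control

for slide in slides:
    engine.say(slide["script"])  # queue the narration for this slide
    engine.runAndWait()          # speak it before moving on
    time.sleep(slide["pause"])   # extra pause between scenes, as in the app
```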

However, the application has some shortcomings, most notably the unnatural quality of the deepfake avatars. They were easily discernible as artificial, which could leave users and their audiences dissatisfied when listening to the AI avatars or viewing the presentations.

Nonetheless, the age of artificial intelligence is advancing at an unprecedented pace, and my overall experience with the application has been positive. I’m keen to hear about your experiences with Deep Brain AI or deepfake technology in general.


Hands-On Exploration of Deepfake Detection and Generation

9 October 2023


Generative AI has proven effective at supporting both everyday and professional life, for example by making vast amounts of knowledge easily digestible and summarized with tools like ChatGPT. Yet alongside well-known grave dangers such as algorithmic bias and a lack of transparency and accountability, deepfakes have received comparatively little attention lately, even though highly realistic examples, like the Morgan Freeman deepfake, have caused considerable commotion.

A lot of research has focused on the detection of deepfakes, and a few detectors are freely accessible online. Testing the Deepware detector with a (to the human eye) clearly worse deepfake and with the well-known Morgan Freeman video, the difference in quality is immediately visible:

The Elon Musk deepfake (Deepware, 2023) is recognized with high confidence by a single model, while for the Morgan Freeman deepfake (Deepware, 2023) five models are used and only one of them flags it as a deepfake.

With the evolution of easily accessible Generative Adversarial Networks (GANs), deepfakes are often created with the intention to deceive, spread misinformation, or even commit financial fraud. Their largest influence lies in social media, since audio-visual content spreads most easily on these platforms. This could lead to an “infocalypse” (Westerlund, 2019), in which people only trust their close social network of friends and family, who tend to reinforce already existing beliefs. Deepfakes that align with one’s own views would then seem more realistic, even though they are fake.

Deepfake videos in particular can be categorized into the types i) face-swap, ii) lip-synching, iii) puppet-master, iv) face synthesis, and v) audio-only (Masood, 2022).

In this experiment, I tried to create a deepfake of Donald Trump saying something that he usually would not say. For this, an audio-only deepfake is created first and then matched with an image using lip-synching to create a video.

First, using ChatGPT I phrased a statement that would be unusual for Donald Trump to say.

Afterwards, I explored many different platforms and options to create Donald Trump’s voice. One option was to train a TTS (text-to-speech) voice cloner (Vocloner, 2023) with some audio data found on Kaggle.

Result:

The result was not very convincing, which is why other tools were explored. Fineshare, another free voice-changer website, produced poor audio as well. Speechify, a commonly used tool, required a premium subscription for cloning voices other than one’s own.

Lastly, I used the website FakeYou and found a pre-trained TTS model (FakeYou, 2023) to generate the audio, which resulted in much better quality. This shows that a more sophisticated, pre-trained model allows for better deepfakes. With the lip-sync function (FakeYou, 2023), an image and the generated audio of Donald Trump were merged to create the deepfake video.
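
As an aside, for readers who would rather reproduce this kind of two-step pipeline with open-source tools instead of web services, the sketch below shows one possible route: voice cloning with the Coqui TTS package and lip-synching with the Wav2Lip repository. The model name, file paths, and checkpoint are assumptions for illustration only; these are not the tools used in the experiment above, and the exact arguments should be checked against each project’s documentation.

```python
# Hedged sketch of the two-step pipeline: (1) clone a voice with a pre-trained
# multilingual TTS model, (2) lip-sync a still image to the generated audio.
# Assumptions: the Coqui TTS package and the Wav2Lip repository are installed,
# and "reference_voice.wav" / "face.jpg" are files you supply yourself.
import subprocess
from TTS.api import TTS

script = "A statement the target person would normally never say."  # e.g. drafted with ChatGPT

# Step 1: audio-only deepfake via voice cloning with a pre-trained model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text=script,
    speaker_wav="reference_voice.wav",  # short clip of the target voice
    language="en",
    file_path="cloned_speech.wav",
)

# Step 2: lip-sync the audio onto a single image (Wav2Lip CLI, run inside its repo).
subprocess.run(
    [
        "python", "inference.py",
        "--checkpoint_path", "checkpoints/wav2lip_gan.pth",
        "--face", "face.jpg",
        "--audio", "cloned_speech.wav",
        "--outfile", "deepfake_video.mp4",
    ],
    check=True,
)
```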

The free Deepware detector (Deepware, 2023) did not manage to flag this video as a deepfake. This shows that detection is still a major problem, even though deepfake detection should ideally be integrated into every social media post; only then do I see how social media can be prevented from being flooded with deepfakes. Having at least a system in place that shows the confidence with which a video is or is not identified as a deepfake would help a lot, so that users are more aware of the informational value of the content. In the context of AI ethics, social media platforms must bear responsibility for protecting users from misinformation.
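
To make the “show the confidence” idea concrete, a platform-side hook could be as simple as the hypothetical sketch below. The detector itself is deliberately left abstract, since no particular detection API is assumed, and the thresholds are arbitrary examples rather than recommendations.

```python
# Hypothetical sketch: attach a deepfake-confidence label to every uploaded video.
# `detector_score` stands in for whatever detection model or service a platform
# would run; the thresholds below are illustrative only.

def label_upload(detector_score: float) -> str:
    """Map a detector's fake probability (0.0-1.0) to a user-facing label."""
    if detector_score >= 0.8:
        return "Likely manipulated (deepfake suspected)"
    if detector_score >= 0.4:
        return "Possibly manipulated: verify before sharing"
    return "No manipulation detected (low deepfake confidence)"

print(label_upload(0.92))  # -> "Likely manipulated (deepfake suspected)"
```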

Finally, even though deepfakes are mostly associated with negative uses, there are many constructive ones. One could be creating personalized and engaging content for marketing purposes: depending on a user’s data and preferences, different influencers can be used to endorse a product specifically for that viewer. It also becomes easier to create advertising campaigns without the influencers having to be present and available. This even has the potential to disrupt how marketing is done, as it is much more flexible, user-oriented, and possibly cheaper.

Where do you see opportunities for deepfakes? What do you think about their potential impact on social media, and what needs to happen to prevent the malicious spread and use of deepfakes?

References

Deepware. (2023). Retrieved from https://scanner.deepware.ai/result/edd37447f779cfc58f48b545f663184d3f6f21ef-1589885916/

Deepware. (2023). Retrieved from https://scanner.deepware.ai/result/2a6115c760da36ee44d757d9105eb1ba4fd66b9f-1629834054/

Deepware. (2023). Retrieved from https://scanner.deepware.ai/result/c53071331497a1da41e8a2b30506342a476c666d-1696428314/?

FakeYou. (2023). Retrieved from https://fakeyou.com/tts/TM:03690khwpsbz

FakeYou. (2023). Retrieved from https://fakeyou.com/face-animation

Masood, M. (2022). Retrieved from https://link.springer.com/article/10.1007/s10489-022-03766-z

Vocloner. (2023). Retrieved from https://vocloner.com/

Westerlund, M. (2019). Retrieved from https://timreview.ca/sites/default/files/article_PDF/TIMReview_November2019%20-%20D%20-%20Final.pdf


Have we reached the era of immortal movie stars? Bruce Willis becomes first celebrity to sell image rights to deepfake firm.

30 September 2022


The technology allowed the actor to return to the screen without ever being on set

Willis’ digital twin used in an advert for a Russian company

According to The Telegraph (2022), Bruce Willis has just this week sold his image rights to the US firm ‘Deepcake’, allowing the creation of a “digital twin”. Deepcake specializes in deepfakes, which involve superimposing one person’s likeness onto another individual (Hellyer, 2022).

Bruce Willis had his first experience with deepfake technology last year, when he allowed for his “twin” to be used in a commercial for a Russian phone service, MegaFon.

In a statement, Willis said: “I liked the precision with which my character turned out. It’s a mini-movie in my usual action-comedy genre. For me, it is a great opportunity to go back in time. With the advent of modern technology, even when I was on another continent, I was able to communicate, work and participate in the filming. It’s a very new and interesting experience, and I thank our entire team.”

Now, the actor has officially sold the rights to his digital doppelganger, which can be hired out by ‘Deepcake’ for future projects.

Deepfake technology has also raised ethical questions. The ability to recreate someone so near-perfectly is cause for worry; it is, for example, the perfect tool for spreading political disinformation. For Hollywood, however, it opens up the possibility of actors starring in movies after they die and of stars from the past being brought back to life on screen.

Willis, who was diagnosed with aphasia and announced earlier this year that he would be stepping away from acting as a result of the condition, may be the first of many celebrities willing to have their legacies live on.

You can see deepfake technology in action and watch the behind-the-scenes video of the Bruce Willis commercial below:

https://www.youtube.com/watch?v=Ca75gKxfdPQ

References

Allen, N. (2022). Deepfake tech allows Bruce Willis to return to the screen without ever being on set. The Telegraph. [online] 28 Sep. Available at: https://www.telegraph.co.uk/world-news/2022/09/28/deepfake-tech-allows-bruce-willis-return-screen-without-ever/ [Accessed 30 Sep. 2022].

‌Hellyer, F. (2022). Deepfakes: The New Ticket to Immortality? [online] Rolling Stone. Available at: https://www.rollingstone.com/culture-council/articles/the-new-ticket-to-immortality-1324513/ [Accessed 30 Sep. 2022].

Vincent, J. (2021). Everyone will be able to clone their voice in the future. [online] The Verge. Available at: https://www.theverge.com/22672123/ai-voice-clone-synthesis-deepfake-applications-vergecast [Accessed 30 Sep. 2022].


Deepfake Fraud – The Other Side of Artificial Intelligence

8 October 2021

Dangers of AI: How deepfakes through Artificial Intelligence could be used for fraud, scams and cybercrime.


Together with Machine Learning, Artificial Intelligence (or: AI) can be considered one of, if not the, hottest emerging innovations in the field of technology today (Duggal, 2021). AI entails the ability of a computer or a machine to ‘think by itself’, as it strives to mimic human intelligence instead of simply executing actions it was programmed to carry out. By using algorithms and historical data, AI utilizes Machine Learning in order to comprehend patterns and how to respond to certain actions, thus creating ‘a mind of its own’ (Andersen, n.d.).

History

Even though the initial days of Artificial Intelligence research date back to the late 1950s, the technology has only recently been introduced to the general public on a wider scale. The science behind it is complex, but AI is becoming more widely known and used on a day-to-day basis. This is because computers have become much faster and data (for the AI to learn from) has become more accessible (Kaplan & Haenlein, 2020). This allows AI to be more effective, to the point where it has already been implemented in everyday devices such as our smartphones. Do you use speech or facial recognition to unlock your phone? Do you use Siri, Alexa or Google Assistant? Ever felt like advertisements on social media resonate a bit too much with your actual interests? Whether you believe it or not, it is highly likely that both you and I come into contact with AI on a daily basis.

AI in a nutshell: How it connects to Machine/Deep Learning

That’s good… right?

Although the possibilities for positively exploiting AI seem endless, one of the more recent phenomena that shocked the world about the dangers of AI is called ‘deepfaking’. Here, AI utilizes a deep learning algorithm to replace a person in a photo or video with someone else, creating seemingly (!) authentic and real visuals of that person. As one can imagine, this results in situations where people appear to be doing things in the media which, in reality, they have not. Although people fear the usage of deepfake technology against celebrities or high-status individuals, this can – and actually does – happen to regular people, possibly you and me.
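
To give a rough sense of how this works under the hood: the classic face-swap approach trains a single shared encoder together with one decoder per person, and then swaps decoders at inference time so that person B’s face is rendered with person A’s pose and expression. The sketch below is a heavily simplified illustration of that idea; the layer sizes are arbitrary, and real pipelines (face detection, alignment, blending, adversarial refinement) are far more involved.

```python
# Simplified sketch of the classic face-swap deepfake architecture: one shared
# encoder learns a common face representation, and one decoder per identity
# reconstructs faces. Swapping decoders at inference time produces the face
# replacement. Layer sizes are arbitrary illustration values.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # latent face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (not shown) reconstructs person A through decoder_a and person B
# through decoder_b, both via the shared encoder. The swap: encode a frame of
# person A, but decode it with person B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a cropped face frame
swapped = decoder_b(encoder(frame_of_a))  # B's face with A's pose/expression
print(swapped.shape)                      # torch.Size([1, 3, 64, 64])
```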

Cybercrime

Just last month, scammers from all over the world were reported to have been creatively using this cybercrime ‘technique’ to commit fraud against, scam, or blackmail ordinary people (Pashaeva, 2021). From posing as a wealthy bank owner to extract money from investors, to blackmailing people with videos of them seemingly engaging in a sexual act… as mentioned before, the possibilities for exploiting AI seem endless. Deepfakes are just another illustration of this fact. I simply hope that, in time, the positives of AI will outweigh the negatives. I would love to hear your perspective on this matter.

Discussion: Deepfake singularity

For example, would you believe this was actually Morgan Freeman if you did not know about artificial intelligence and deepfakes? What do you think this technology could lead to in the long term, once it develops into a much more believable state? Will we always be able to spot the fakes? And what could this mean in terms of scamming or blackmail if, for example, Morgan Freeman were made to say other things…?

References

Duggal, N. (2021). Top 9 New Technology Trends for 2021. Available at: https://www.simplilearn.com/top-technology-trends-and-jobs-article

Andersen, I. (n.d.). What Is AI and How Does It Work? Available at: https://www.revlocal.com/resources/library/blog/what-is-ai-and-how-does-it-work

Kaplan, A., & Haenlein, M. (2020). Rulers of the world, unite! The challenges and opportunities of artificial intelligence. Business Horizons, 63(1). https://doi.org/10.1016/j.bushor.2019.09.003

Pashaeva, Y. (2021). Scammers Are Using Deepfake Videos Now. Available at: https://slate.com/technology/2021/09/deepfake-video-scams.html


Author: Roël van der Valk

MSc Business Information Management student at RSM Erasmus University - Student number: 483426 TA BM01BIM Information Strategy 2022

The arms race in Deepfakes

5 October 2021


In the new Star Wars series The Mandalorian, there is an iconic scene with Mark Hamill in which he appears as a young 30-year-old even though he is 70 years of age. Disney used digital techniques to de-age Mark Hamill so that he appears as he did in the first Star Wars films some 40 years ago. Now, what does this have to do with deepfakes? Lucasfilm used an entire team of VFX producers to de-age Mark’s character, and the result was a wax-doll-looking version of the character. In just four days, one person using deepfake technology was able to create a better-looking scene and was consequently hired by Lucasfilm. In the video below you can see for yourself the power of this technology.

As you just saw, deepfake technology gives filmmakers tremendous opportunities to do what was once reserved for large film studios. It gives filmmakers around the world the possibility to create high-end visual effects without the need for large studio technologies or backing.

However, there are downsides to this technology as well. Even though it is a golden gift for the film industry, malicious parties could use it to slander and/or discredit public figures, and since deepfakes rely on machine learning they keep improving and will eventually become indistinguishable from reality. The following video is from 2018, and since then the quality of deepfakes has only improved.

Now, what can we do to battle this ever-increasing threat of mis- and disinformation? In 2019 a paper called FaceForensics++ was released, in which the researchers used machine learning to create a deepfake detector by running four deepfake generators over their dataset of a thousand videos. In this way, the detector was trained to recognize several kinds of deepfakes; on high-resolution video it could detect deepfakes with an accuracy of over 99%, but on low-resolution videos accuracy dropped to 51.80% (Rössler et al., 2019). This reveals a weakness of the technology, as online videos are compressed and re-encoded numerous times. Additionally, detection software allows deepfake models to keep improving based on where their faults were found: a true arms race between deepfake generation and detection. As one improves, the other will inevitably improve along with it. The only way to protect ourselves against such technology is to proactively develop deepfakes and their detectors, in order to stay ahead of those who would use them to do harm.
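
To give a sense of what such a detector looks like in code, the sketch below outlines a frame-level real/fake classifier in the spirit of FaceForensics++: a pretrained CNN backbone with a two-class head, applied to sampled video frames and averaged. This is a simplified, assumption-laden illustration; the paper itself crops faces and uses a stronger backbone (XceptionNet), and the model here would still need to be fine-tuned on labeled real and fake data before its scores mean anything.

```python
# Hedged sketch of a frame-level deepfake classifier in the spirit of
# FaceForensics++: a pretrained backbone with a real/fake head, scores averaged
# over sampled frames. Shown for structure only; it must be fine-tuned on
# labeled real/fake face crops before the output is meaningful.
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

# ResNet-18 backbone with a 2-class (real/fake) head; the paper uses
# XceptionNet, so this backbone choice is an assumption made for brevity.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def fake_probability(video_path: str, every_n_frames: int = 30) -> float:
    """Average the per-frame 'fake' probability over sampled frames."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(batch), dim=1)
            scores.append(probs[0, 1].item())  # index 1 = "fake" by convention here
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0
```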

Reference: Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J. and Nießner, M., 2019. FaceForensics++: Learning to Detect Manipulated Facial Images. arXiv (2019).


The Threat of Deepfakes

12 October 2019


Last summer an app called DeepNude caused a lot of controversy in the (social) media. DeepNude was an AI-based piece of software with the ability to create very realistic nude pictures of any face uploaded to the app. Mass criticism followed, the app’s servers were overloaded by curious users, and not much later the app went offline permanently. DeepNude stated on Twitter that the probability of misuse was too high and that the world “was not ready yet”; the app has never come back online since (Deepnude Twitter, 2019). It shows that deepfake technology is becoming available to the public sooner than we thought, including all of its potential risks.

A definition of deepfake is “AI-based technology used to produce or alter video content so that it presents something that didn’t, in fact, occur” (Rouse, 2019). As deepfake is AI-based technology, it is able to improve over time: as the amount of input data increases, the technology learns how to create better output. In my opinion deepfake has amazing potential in the entertainment industry, but there is a serious risk when the technology is misused. The AI technology makes it harder and harder for humans to distinguish real videos from fake ones. Deepfake videos of world leaders like Trump and Putin can already be found on the internet, and deepfake porn videos of celebrities are discovered every once in a while.

With the upcoming presidential elections of 2020 in the United States, politicians and many others are seeking ways to prevent a scenario similar to the 2016 elections, which were characterized by the spread of fake news and the ongoing allegations resulting from it. These events very likely influenced the outcome of those elections (CNN, 2019). Recently the state of California passed a law which “criminalizes the creation and distribution of video content (as well as still images and audio) that are faked to pass off as genuine footage of politicians” (Winder, 2019). In 2020 we’ll find out whether deepfakes have been restricted successfully.

I hope developers and users of deepfake technology will become aware of its huge threats and will use it in a responsible way. It is also important for society to stay critical of its news sources and to avoid supporting these types of technology misuse. According to Wired (Knight, 2019), Google has released thousands of deepfake videos to be used as AI training input for detecting other deepfake videos. Another company, Deeptrace, is using deep learning and AI to detect and monitor deepfake videos (Deeptrace, n.d.).

See you in 2020…

References

CNN. (2019). 2016 Presidential Election Investigation Fast Facts. Retrieved from CNN: https://edition.cnn.com/2017/10/12/us/2016-presidential-election-investigation-fast-facts/index.html

Deepnude Twitter. (2019). deepnudeapp Twitter. Retrieved from Twitter: https://twitter.com/deepnudeapp

Deeptrace. (n.d.). About Deeptrace. Retrieved from Deeptrace: https://deeptracelabs.com/about/

Knight, W. (2019). Even the AI Behind Deepfakes Can’t Save Us From Being Duped. Retrieved from Wired: https://www.wired.com/story/ai-deepfakes-cant-save-us-duped/

Rouse, M. (2019). What is deepfake (deep fake AI). Retrieved from TechTarget: https://whatis.techtarget.com/definition/deepfake

Winder, D. (2019). Forget Fake News, Deepfake Videos Are Really All About Non-Consensual Porn. Retrieved from Forbes: https://www.forbes.com/sites/daveywinder/2019/10/08/forget-2020-election-fake-news-deepfake-videos-are-all-about-the-porn/#26a929963f99


The danger of technology development – Deepfakes

11 September 2019

Nowadays, technology trends are everywhere. While the development of new and innovative technologies can make life easier and more convenient, the dangerous sides of technological development may also affect people in extremely negative ways. One example is the rise of deepfake technology. This AI-based technology is used to manipulate reality, in the form of images and videos of real people who say or do things that they have never said or done. In recent years, many famous people like Obama, Mark Zuckerberg and Kit Harington have appeared in deepfake videos. The advancement of machine learning techniques helps editors make deepfake content even more realistic, making it difficult to distinguish from reality. Multiple professors and the Dutch Public Prosecution Service are getting more and more concerned about the development of deepfake technology, which may escalate to dangerous levels. But why is it so dangerous for certain people and perhaps even society as a whole?

First of all, the advantages of deepfake technology should not be neglected. For example, deepfake technology can be used for educational purposes, providing information to students in more innovative ways. Other benefits may be more psychological, in the sense that deepfake technology may help people with certain disabilities to experience pornographic or video-game-related content in a better and more autonomous way. However, the enormous risks that deepfake technology brings often overrule its possible benefits.

Namely, manipulated photos and videos of famous and/or influential people may be used to spread misinformation or damage these people’s reputations. This may have an impact not only in the business world (where CEOs may be depicted in a negative light) but also in the political and societal sphere (in which influential politicians may be made to say or do things against democracy). Besides the advancements in AI and machine learning technologies, the tools and the edited videos themselves can be accessed and distributed more easily nowadays. What is perhaps most terrifying is that the creation of deepfake content cannot be prevented, since the technologies and the access to them keep evolving. This problem has no clear solution yet; researchers propose that tracking systems need to be built that are able to distinguish deepfake videos from real ones. Yet this will be a tough task.

Sources:
Chesney, Robert and Citron, Danielle Keats, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security (July 14, 2018). 107 California Law Review (2019, Forthcoming); U of Texas Law, Public Law Research Paper No. 692; U of Maryland Legal Studies Research Paper No. 2018-21. Available at SSRN: https://ssrn.com/abstract=3213954 or http://dx.doi.org/10.2139/ssrn.3213954

Eadicicco, L. (2019). There’s a terrifying trend on the internet that could be used to ruin your reputation, and no one knows how to stop it. [online] Business Insider Nederland. Available at: https://www.businessinsider.nl/dangerous-deepfake-technology-spreading-cannot-be-stopped-2019-7?international=true&r=US [Accessed 11 Sep. 2019].

Nu.nl. (2019). Openbaar Ministerie uit zorgen over mogelijke afpersing via deepfakes | NU – Het laatste nieuws het eerst op NU.nl. [online] Available at: https://www.nu.nl/tech/5989399/openbaar-ministerie-uit-zorgen-over-mogelijke-afpersing-via-deepfakes.html [Accessed 11 Sep. 2019].


The Terrifying Rise of Deep-Fake Content

17 October 2018

Earlier this year the famous actress Drew Barrymore had to deal with some bizarre fabrications about her life that were published in Egyptair’s in-flight magazine “HORUS”. Celebrities often have to deal with stories about their life that are based on half-truths or even lies, but the lengths to which the interviewer went are pretty scary. They accurately photoshopped an original photo of Barrymore holding the magazine “Nisf Al-Donia”, swapped it with their own magazine, and unashamedly published it in Egyptair’s magazine. The only reason this was discovered is that the content, Barrymore’s forged answers, did not reflect Barrymore’s life at all. But what if the interviewer had been a bit more clever? Then this magazine article would have gone unnoticed and been passed on to passengers without any remarks.

This brings me to the topic I want to address in this blog post, something that I have been worrying about for quite some time now: deep-fake content. Deep-fake is an AI-based human image synthesis technique used to combine existing and fabricated images, audio, or video into new source material, creating fake content that looks like reality. Deep-fakes are mostly used to create fake celebrity pornography or revenge pornography [1]. Of course, there are also less harmful use cases of deep-fake content, for instance the comedy sketches known as derpfakes (see below).

From a technological point of view, deep-fake techniques are a marvel of engineering, pushing the boundaries of what can be done with graphics processors and algorithms. However, deep-fake technologies are sadly mostly used for pornography or, even worse, revenge pornography [2]. The latter is really important because celebrities are often well protected due to their popularity, but regular people like you and me will find themselves in a much trickier situation. In the UK, creating harmful deep-fake material is considered a crime, but in other EU member states this is not the case. Recently the United States Department of Defense developed a tool designed to catch deep-fakes [3]. But governments are still hesitant to make deep-fake offences a specific type of crime. The public is not sufficiently aware of the rising technological possibilities of deep-fakes, and thus governments do not make it a priority either. With this blog post I hope to have given you some insight into this topic, to have made you aware of the dangers, and to have convinced you that this should explicitly be made punishable.

To the women reading this post: please be careful with what you post on social media and how accessible your content is. Numerous studies have shown that women are more often victims of deep-fake content than men.

1 : “What Are Deepfakes & Why the Future of Porn is Terrifying”. Highsnobiety. 2018-02-20. Retrieved 2018-02-20.
2 : https://tweakers.net/nieuws/134449/vervangen-van-gezicht-in-pornovideos-met-ai-neemt-grote-vlucht-door-tool.html
3 : https://www.technologyreview.com/s/611726/the-defense-department-has-produced-the-first-tools-for-catching-deepfakes/
