Let’s dive into a topic that has been at the center of attention recently: deepfakes. If you’re unfamiliar with the term: deepfakes let you alter or create video content using AI. With tools like DeepFaceLab and Faceswap, you can make it look like someone is saying something they never actually said.
There are genuinely cool aspects to deepfakes. Think about the film industry: actors could be placed in roles or scenes they never actually filmed. This could be very useful for dangerous stunts, for working around logistical difficulties, or when an actor has passed away. Or take education: imagine a ‘live’ presentation by a historical figure, bringing history lessons to life. And on the practical side, deepfakes could make it look like you’re speaking a foreign language fluently on video calls, making global communication smoother.
However, there’s a flip side. Picture this: a video surfaces of a well-known figure, like former President Obama, giving a speech, but something feels off because the content isn’t something he would typically say. At the bottom of this page, I’ve added a video that Buzzfeed created in 2018. An altered video like this, a deepfake, could easily spread misleading information. The potential for misuse, especially in the age of social media, is high. Not only could this disrupt the news cycle, but on a personal level, anyone’s image could be used without permission to create false scenarios.
So, what’s being done about this? Tech experts are on the case, working on tools to detect and flag deepfake content. There’s also a growing conversation around creating regulations to ensure responsible use of this technology.
Ultimately, deepfakes represent a fascinating blend of innovation and challenge, just like AI in general. As we navigate this digital era, it’s essential to approach such advancements with both enthusiasm and caution. Always be critical of what you see online and be careful with what you share!
Hi Laurens,
Interesting topic about deepfakes!
I am curious about the upcoming legislation around this topic, especially the question of whether you have a right to your own face. I know this has been a big deal in recent months in the movie industry, where actors were striking: they want clear regulations on the use of deepfakes in films. It is the same problem as with the Obama video; it could lead to all kinds of misuse, with actors appearing in movies they do not want to play in or having no control over the scenes.

I know that in the United States they are working on the DEEPFAKES Accountability Act, proposed by Representative Clarke. The legislation would require creators to label all deepfakes uploaded to online platforms and to make transparent any alterations made to a video or other type of content. I think the idea is good, but I am not sure about the implications. Most platforms are also asking content creators to label deepfakes themselves, and I wonder how they are going to check that. For now it is still easy to spot most deepfakes, but some of them are already really good. In the future it is going to be much more difficult to know what is real and to verify it.