The Deepfake Dilemma: When Technology Threatens Trust

29 September 2023


Imagine receiving a video call from a family member. Everything looks and sounds normal, but then he asks you for money, claiming he will get in trouble otherwise. Something similar happened to a relative of mine. His brother video-called him on WeChat and asked for money. Everything looked and sounded normal; after all, it was his brother he saw on the screen. Still, he was reluctant: why would his brother suddenly ask for money? He didn’t transfer anything and ended the call. Afterwards, he phoned his brother, who told him that he had never made a video call at all. It turned out that a scammer had used an AI deepfake to impersonate the brother, copying both his voice and his image. Luckily, my relative didn’t fall into the scammer’s trap because he was aware of this type of scam; others have been less fortunate. For example, a man in China transferred around $570,000 to a deepfake scammer, thinking he was helping a friend with a bidding project (Zhao, 2023).

AI deepfakes have been on the rise as AI technology becomes more accessible. Because AI is developing rapidly, it is becoming increasingly difficult to spot a deepfake video, and scammers use this to their advantage. But how does it actually work? Deepfake software takes samples of someone’s audio or footage and learns to recreate their movements and voice accurately. All it takes is a few photos of the target’s face, which can be taken from social media, or a voice clip of less than 15 seconds to recreate a person’s voice (Chua, 2023). This raises ethical and privacy concerns, as deepfakes can violate individuals’ privacy by creating fake videos and images they never gave permission for. Besides, what will the scammer do with your videos and images? What if they are used for other malicious purposes?
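To make the idea of voice cloning a bit more concrete: systems like these typically compress a short audio sample into a numerical “speaker embedding” and then compare voices by measuring the similarity between embeddings. The sketch below is purely illustrative — the vectors are made-up toy numbers standing in for the output of a real voice-encoder model — but it shows why a clip of a few seconds can be enough: the system only needs a compact vector, not hours of audio.

```python
import math

def cosine_similarity(a, b):
    # Compare two speaker embeddings (toy vectors standing in for the
    # output of a real voice-encoder model). Returns a value in [-1, 1];
    # values near 1 mean "these sound like the same speaker".
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up toy embeddings: two clips of the same voice vs. a different voice.
brother_clip_1 = [0.9, 0.1, 0.3]
brother_clip_2 = [0.85, 0.15, 0.35]
stranger_clip  = [0.1, 0.9, 0.2]

same_voice  = cosine_similarity(brother_clip_1, brother_clip_2)
other_voice = cosine_similarity(brother_clip_1, stranger_clip)

# A cloning or verification system accepts a match above some threshold.
print(same_voice > 0.9)    # same speaker: high similarity  -> True
print(other_voice > 0.9)   # different speaker: low match   -> False
```

This is also the double-edged sword of the technology: the same compact embedding that lets a scammer clone a voice from a short clip is what speaker-verification systems use to try to detect impostors.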

What’s interesting is that Tencent, the company that owns WeChat, actually thinks deepfakes could be good, describing them as a highly creative and groundbreaking technology. The company gave a few examples of how deepfakes can be applied now and in the future. For instance, deepfakes can be used to let deceased actors appear in new movies or to generate voice-overs in different languages. Deepfakes could also help patients affected by chronic illness: for example, the technology can allow people who have lost their voices to communicate again (Hao, 2020). AI deepfakes have the potential for positive applications; however, misuse of the technology for malicious purposes remains a significant concern. Although WeChat’s owner sees deepfakes as a technology that can be good, the question is: how will they protect users from harm?

As technology continues to advance at a rapid pace, a dilemma emerges: it challenges our ability to separate truth from fiction while raising ethical and privacy concerns. In a world where a familiar face on a video call can no longer be taken at face value, you need to think twice and ask yourself whether what you are seeing is real or fake.

What are your opinions on AI deepfakes?

Sources:

Chua, N. (2023). Scammers use deepfakes to create voice recordings and videos to trick victims’ family, friends. https://www.straitstimes.com/singapore/scammers-use-deepfakes-to-create-voice-recordings-and-videos-of-victims-family-friends-to-trick-them

Entrepreneur. (2023). ‘We were sucked in’: How to protect yourself from deepfake phone scams. https://www.entrepreneur.com/science-technology/5-ways-to-spot-and-avoid-deepfake-phone-scams/453561

Hao, K. (2020). The owner of WeChat thinks deepfakes could actually be good. https://www.technologyreview.com/2020/07/28/1005692/china-tencent-wechat-ai-plan-says-deepfakes-good/

Zhao, H. (2023). AI deepfakes are on the rise in China. https://radii.co/article/deepfake-china-ai-scammers


1 thought on “The Deepfake Dilemma: When Technology Threatens Trust”

  1. The dystopian example is very scary but also feels like it could really become reality soon. Considering my parents’ lackluster technological skills, since they did not grow up during these rapid technological changes, I would really worry about whether they would be scammed. I agree that the technology is very progressive and opens up a lot of opportunities, especially in terms of hyper-personalized marketing, but I also agree with you that there are many dangers to deepfakes. Fake news appearing in users’ feeds will look exponentially more realistic, which is why I really think social media should take responsibility and address this issue, e.g. by implementing automatic checks on whether a post might be a deepfake (as this is quite difficult, providing a confidence score for whether it is a deepfake could be a good starting point). This would result in critically eyeing any information, which is both good and troublesome, as you pointed out. I am scared to use these tools personally or to prank friends, as I fear what my data (e.g. voice and image) could be used for. There have been uproars over deepfake porn of celebrities before, and thinking about it, deepfakes (and the users creating them) can be really malicious.
