Imagine you’re receiving a video call from a family member, and everything looks and sounds normal. But then he asks you for money, saying he will get into trouble otherwise. Something similar happened to a relative of mine. His brother video-called him on WeChat and asked for money. Everything looked and sounded normal; after all, it was his brother he saw on the screen. Still, he was reluctant: why would his brother suddenly ask for money? He didn’t transfer the money and ended the call. Afterwards, he phoned his brother, who told him that he hadn’t made any video call at all. It turned out that a scammer had used an AI deepfake to impersonate my relative’s brother, with the same voice and image. Luckily, my relative didn’t fall into the scammer’s trap because he was aware of this scam; however, some people were unfortunate enough to be tricked. For example, a man in China transferred around $570,000 to a scammer using a deepfake, thinking he was helping a friend with a bidding project (Zhao, 2023).
AI deepfakes have been on the rise as AI technology becomes more accessible. Because AI is developing rapidly, it is becoming increasingly challenging to spot a deepfake video, and scammers use this to their advantage. But how does it actually work? Deepfake systems take examples of someone’s audio or footage and learn how to recreate their movements and voice accurately. All it takes is a few photos of the target’s face, which can be taken from social media, or a short video clip of less than 15 seconds to recreate a person’s voice (Chua, 2023). This raises ethical and privacy concerns, as it can violate individuals’ privacy by creating fake videos and images that they never gave permission for. Besides, what will the scammer do with your videos and images? What if they use them for other malicious purposes?
What’s interesting is that the owner of WeChat actually thinks deepfakes could be good, describing them as a highly creative and groundbreaking technology. The owner gave a few examples of how deepfakes can be applied now and in the future. For instance, deepfakes can be used to let deceased actors appear in new movies or to generate voice-overs in different languages. Furthermore, deepfakes can help patients affected by chronic illness; for example, they allow people who have lost their voices to communicate through this technology (Hao, 2020). AI deepfakes have the potential for positive applications; however, misuse of this technology with malicious intent is a significant concern. Although the owner of WeChat sees deepfakes as a technology that can be good, the question is: how will they protect users from harm?
As technology continues to advance at a rapid pace, there is also a dilemma: it challenges our ability to separate truth from fiction while raising ethical and privacy concerns. In a world where a familiar face on a video call can no longer be taken at face value, you need to think twice and ask yourself whether what you are seeing is real or fabricated.
What are your opinions on AI deepfake?
Sources:
Chua, N. (2023). Scammers use deepfakes to create voice recordings and videos to trick victims’ family, friends. https://www.straitstimes.com/singapore/scammers-use-deepfakes-to-create-voice-recordings-and-videos-of-victims-family-friends-to-trick-them
Entrepreneur. (2023). ‘We were sucked in’: How to protect yourself from deepfake phone scams. https://www.entrepreneur.com/science-technology/5-ways-to-spot-and-avoid-deepfake-phone-scams/453561
Hao, K. (2020). The owner of WeChat thinks deepfakes could actually be good. https://www.technologyreview.com/2020/07/28/1005692/china-tencent-wechat-ai-plan-says-deepfakes-good/
Zhao, H. (2023). AI deepfakes are on the rise in China. https://radii.co/article/deepfake-china-ai-scammers