The Deepfake Dilemma: When Technology Threatens Trust

29 September 2023

Imagine receiving a video call from a family member. Everything looks and sounds normal, but then he asks you for money, claiming he will get in trouble otherwise. Something similar happened to a relative of mine. His brother video-called him on WeChat and asked for money. Everything looked and sounded normal; after all, it was his brother he saw on the screen. Still, he was reluctant: why would his brother suddenly ask for money? He didn’t transfer the money and ended the call. Afterwards, he phoned his brother, who told him he hadn’t made a video call at all. It turned out that a scammer had used an AI deepfake to impersonate the brother, with the same voice and image. Luckily, my relative didn’t fall into the scammer’s trap because he was aware of this scam; others have been less fortunate. For example, a man in China transferred around $570,000 to a scammer using a deepfake, thinking he was helping a friend with a bidding project (Zhao, 2023).

AI deepfakes have been on the rise as AI technology becomes more accessible. Because AI is developing rapidly, it is becoming increasingly challenging to spot a deepfake video, and scammers use this to their advantage. But how does it actually work? Deepfake software takes examples of audio or footage of someone and learns how to recreate their movements and voice accurately. All it takes is a few photos of the target’s face, which can be taken from social media, or a short video clip of less than 15 seconds to recreate a person’s voice (Chua, 2023). This raises ethical and privacy concerns, as it can violate individuals’ privacy by creating fake videos and images that they did not give permission for. Besides, what will the scammer do with your videos and images? What if they use them for other malicious purposes?
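To make the “learning to recreate someone” step a little more concrete, here is a deliberately tiny sketch of the shared-encoder / per-person-decoder idea behind face-swap deepfakes. Everything in it is an invented toy stand-in: real systems train deep neural networks on the photos and footage described above, while this version uses small linear maps and NumPy so the data flow fits in a few lines.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins (all names and sizes invented for illustration):
EXPR, FACE = 4, 16   # size of the "expression" code and of a "face" vector

# Pretend each person's appearance is a fixed linear "style" map, so that
# face = expression @ style. In a real deepfake, faces are images and the
# per-person decoders are deep neural networks learned from footage.
style_a = rng.normal(size=(EXPR, FACE))   # person A's appearance (decoder A)
style_b = rng.normal(size=(EXPR, FACE))   # person B's appearance (decoder B)

# A face-swap deepfake trains one SHARED encoder and one decoder per person.
# In this linear toy we can solve for the shared encoder directly: it must
# map BOTH people's faces back to the same underlying expression code.
stacked = np.vstack([style_a, style_b])           # (2*EXPR, FACE)
wanted = np.vstack([np.eye(EXPR), np.eye(EXPR)])  # style @ encoder = identity
encoder, *_ = np.linalg.lstsq(stacked, wanted, rcond=None)  # (FACE, EXPR)

# "Film" person A making some expression...
expression = rng.normal(size=EXPR)
face_a = expression @ style_a

# ...encode it, then decode it with person B's decoder: the result is
# person B's face wearing person A's expression.
fake_b = (face_a @ encoder) @ style_b

print(np.allclose(fake_b, expression @ style_b))  # did the swap land on B?
```

The design point the toy preserves is that the encoder is shared: because it reduces either person’s face to the same “expression” code, plugging that code into the other person’s decoder produces the swap. That is why a scammer needs only modest footage of the target, as the paragraph above notes.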

What’s interesting is that the owner of WeChat actually thinks deepfakes could be good, describing them as highly creative and groundbreaking technology. The owner gave a few examples of how deepfakes can be applied now and in the future. For instance, deepfakes can be used to let deceased actors appear in new movies or to generate voice-overs in different languages. Deepfakes can also help patients affected by chronic illness: for example, the technology allows people who have lost their voice to communicate again (Hao, 2020). AI deepfakes have the potential for positive applications; however, misuse of this technology for malicious purposes is a significant concern. Although the owner of WeChat sees deepfakes as a technology that can be good, the question remains: how will they protect users from harm?

As technology continues to advance at a rapid pace, a dilemma emerges: it challenges our ability to separate truth from fiction while raising ethical and privacy concerns. In a world where a familiar face on a video call can no longer be taken at face value, you need to think twice and ask yourself whether what you are seeing is real or fiction.

What are your opinions on AI deepfake? 

Sources:

Chua, N. (2023). Scammers use deepfakes to create voice recordings and videos to trick victims’ family, friends. https://www.straitstimes.com/singapore/scammers-use-deepfakes-to-create-voice-recordings-and-videos-of-victims-family-friends-to-trick-them

Entrepreneur. (2023). ‘We were sucked in’: How to protect yourself from deepfake phone scams. https://www.entrepreneur.com/science-technology/5-ways-to-spot-and-avoid-deepfake-phone-scams/453561

Hao, K. (2020). The owner of WeChat thinks deepfakes could actually be good. https://www.technologyreview.com/2020/07/28/1005692/china-tencent-wechat-ai-plan-says-deepfakes-good/#:~:text=The%20news%3A%20In%20a%20new,a%20highly%20creative%20and%20groundbreaking

Zhao, H. (2023). AI deepfakes are on the rise in China. https://radii.co/article/deepfake-china-ai-scammers


Acceptance of AI through a cultural lens

12 September 2023

Artificial intelligence (AI) has become a big part of our lives, yet acceptance of AI seems to differ across countries. I came across an article stating that a few campsites in the Netherlands use facial recognition to give customers access to the swimming pool, instead of having staff check their card or wristband (Hulsen, 2023). Surprisingly, at one campsite with 300 customers, only three opted for this futuristic convenience; most were worried about facial recognition because of privacy concerns. On the busy streets of China, by contrast, such applications of AI are common practice in public places, and people seem to be more accepting. The stark difference between these two countries raises intriguing questions: How does culture shape our acceptance of AI? What role does it play in the way we perceive and accept AI?

The acceptance of AI can be explained through two of Geert Hofstede’s cultural dimensions. Lee & Joshi (2020), for example, used uncertainty avoidance (UA) and individualism/collectivism. UA refers to the way a society deals with the uncertainty of the future and the extent to which its members feel threatened by unknown situations. Individuals from high-UA cultures are more likely to adopt AI than those from low-UA cultures. The reason is that technological solutions appeal more to individuals from high uncertainty avoidance cultures, as such solutions can increase predictability, making them more likely to invest in technologies. People from individualistic cultures, however, may not be as inclined as those from collectivistic cultures to depend on AI. A reason for this could be that in collectivistic countries, the use of facial recognition is perceived to benefit society as a whole, and people may prioritize efficiency and convenience, which could lead to greater acceptance of AI.

When organizations want to increase the adoption of AI, it is worth considering it from a cultural perspective. Do you think that culture has an influence on the acceptance of AI? Share your thoughts in the comments! 👇

Sources:

Hulsen, S. (2023). Steeds meer campings met gezichtsherkenning: handig, maar mag dit zomaar? https://www.rtlnieuws.nl/nieuws/nederland/artikel/5394988/steeds-meer-campings-met-gezichtsherkenning-zwembad

Hofstede Insights. (2023). Country Comparison Tool. https://www.hofstede-insights.com/country-comparison-tool?countries=china%2Cnetherlands

Lee, K., & Joshi, K. (2020). Understanding the Role of Cultural Context and User interaction in artificial intelligence based systems. https://www.tandfonline.com/doi/full/10.1080/1097198X.2020.1794131
