Every year, sometime in September, much of the world’s press turns its attention to California to see what new products and services Apple will announce. For the next week or two, the web is full of stories about the new iPhone or MacBook. The situation was similar, though less frenzied, at Microsoft’s earlier Surface launch event; again, the hype was all about the hardware the company was bringing to market. Funnily enough, when Google held its yearly event last year, the excitement was not about new hardware at all, but about a phone call made on stage. In it, software trained to make phone calls booked an appointment at a hair salon, talking to a real human without that human noticing (The Verge, 2018). It was later revealed that the software cannot yet handle truly open-ended conversations and remains limited to a narrow range of calls, usually reservations or requests for opening hours.

With this in the back of my mind, I was startled when I read the news a couple of months ago: criminals had managed to mimic a CEO’s voice convincingly enough (again using AI software) to make an employee transfer over $200,000 to their accounts (Wall Street Journal, 2019). What can you still trust? With deepfake videos (videos in which an AI creates a realistic fake) getting better and better, and voices being copied almost to perfection, how can we consume content on the internet without constantly asking ourselves: is this even real (New York Times, 2019)? If it is possible to open a bank account using video authentication, couldn’t someone open a bank account in my name? Or worse, register a new device to my existing account? All of this is theoretically possible. An opinion piece in the New York Times raises another question: if everything can be faked, real videos could lose their power, and people caught on camera doing bad things could defend themselves by simply claiming the video is a deepfake.
I am fascinated by this technology and by all the possibilities AI opens up, and luckily AI is not yet advanced enough to fake everything convincingly. That gives us some time to think about how to develop these tools in a way that prevents abuse, or at least about how to live with them.
New York Times (2019). Deepfakes: Is This Video Even Real? Available at: https://www.youtube.com/watch?v=1OqFY_2JE1c
The Verge (2018). Google’s AI sounds like a human on the phone – should we be worried? Available at: https://www.theverge.com/2018/5/9/17334658/google-ai-phone-call-assistant-duplex-ethical-social-implications
Wall Street Journal (2019). Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case. Available at: https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402