Deepfakes: A New Cybersecurity Threat to Business?

8 October 2019


Cyberattacks are nothing new and can happen to any company. These attacks take many forms, from simple DDoS attacks and phishing attempts to more sophisticated attacks like the recent WannaCry ransomware campaign. Unsurprisingly, these cyberattacks can have a serious revenue impact. Smaller companies can lose over $100,000 per ransomware incident due to downtime (CNN Business, 2017). Even a simple phishing attack can cost a larger corporation at least $1.6 million a year (VadeSecure, 2015).
Recent developments show that cyber threats are becoming more complex. The technology behind these attacks is getting more advanced, while the systems meant to detect them are falling behind.

Deepfakes: A new corporate threat
One of the newest potential cybersecurity threats to business is deepfake technology. The impact of deepfakes on society, for example through fake news, has been widely discussed in the field; their impact on business has received far less attention. Recently, in the first known case of a deepfake attack on a business, criminals fooled a company into wire transferring over $240,000. The CEO of an energy company thought he was speaking on the phone with his boss, the chief executive of the firm’s German parent company. The caller asked the CEO to send the funds to a Hungarian supplier in an “urgent” request, with the promise that the amount would be reimbursed (Threatpost, 2019). In reality, the CEO was not speaking to his boss at all, but to criminals who had used AI-powered audio deepfake technology to mimic his boss’s voice and accent.

What are deepfakes?
The term ‘deepfakes’ first appeared on Reddit, an online social media platform, where a user produced fake pornographic videos of celebrities by mapping their faces onto actors using off-the-shelf AI products. Since its appearance on Reddit, the technique has been applied to a broad range of video and imagery, including but not limited to face swaps, audio deepfakes, facial re-enactment and deepfake lip-syncing. As the technology is still developing, there is no set-in-stone definition yet. In essence, a ‘deepfake’ can be defined as a “subset of fake video that leverages deep learning to make the faking process easier” (The Verge, 2018). Deep learning, a form of machine learning, is the key here: the models used to create deepfakes improve as they are trained on more data, making deepfakes increasingly harder to detect.
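To make the mechanics more concrete, the sketch below shows the classic face-swap setup behind many early deepfakes: one shared encoder learns a common representation of faces, and a separate decoder per identity learns to reconstruct that person’s face. This is a minimal, hypothetical PyTorch sketch; the network sizes, image shapes and the random “face” batches are placeholder assumptions, not anyone’s production pipeline.

```python
# Minimal sketch of the classic deepfake face-swap architecture:
# a shared encoder learns a common latent face representation, and
# one decoder per identity reconstructs that person's face. The
# swap is produced by encoding person A and decoding with B's decoder.
# Shapes and the random batches are illustrative placeholders.
import torch
import torch.nn as nn

LATENT = 128

shared_encoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
    nn.Linear(512, LATENT),
)

def make_decoder() -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(LATENT, 512), nn.ReLU(),
        nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        nn.Unflatten(1, (3, 64, 64)),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()

params = (list(shared_encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

# Training step: each decoder learns to reconstruct its own identity.
faces_a = torch.rand(8, 3, 64, 64)   # placeholder batch of A's face crops
faces_b = torch.rand(8, 3, 64, 64)   # placeholder batch of B's face crops
loss = (loss_fn(decoder_a(shared_encoder(faces_a)), faces_a)
        + loss_fn(decoder_b(shared_encoder(faces_b)), faces_b))
optimizer.zero_grad()
loss.backward()
optimizer.step()

# The "fake": encode a face of person A, decode with B's decoder,
# yielding B's face with A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(shared_encoder(faces_a))
```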

Detecting deepfakes
As the CEO fraud example makes clear, deepfakes can have a real impact on business and revenue. But is there a way to detect these deepfakes, so companies can protect themselves? Not yet: at the moment, the industry has no reliable method to detect them.
However, researchers and tech companies are contributing to the detection of deepfakes. Google released a large dataset of visual deepfakes to help build tools able to detect them (Google, 2019). Similarly, Facebook partnered with Microsoft, the Partnership on AI, and academics from a broad range of universities to launch the Deepfake Detection Challenge (DFDC), with the goal of producing technology that everyone can use to detect when AI has been used to alter a video (Facebook, 2019).
Unfortunately, this may not solve the problem. As the technology to detect deepfakes improves, the deepfakes themselves will most likely improve faster than the methods used to detect them. Human intelligence and expertise will still be needed to identify deepfakes for the foreseeable future (Wired, 2019).
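For a sense of what such detection tooling looks like, most current approaches train a binary classifier on frames taken from real and manipulated videos, using datasets like the ones Google and the DFDC released. The sketch below is a hypothetical, minimal PyTorch version; the tiny CNN, image sizes and random batch are illustrative assumptions only.

```python
# Hypothetical frame-level deepfake detector: a small CNN that
# classifies individual video frames as real (0) or fake (1).
# Model, shapes and the random batch are illustrative only.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),   # single logit: probability of "fake" after sigmoid
)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

frames = torch.rand(16, 3, 128, 128)           # placeholder frame batch
labels = torch.randint(0, 2, (16, 1)).float()  # 0 = real, 1 = fake

logits = detector(frames)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At inference time, a whole video is typically scored by averaging
# per-frame probabilities: torch.sigmoid(detector(frames)).mean()
```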

How companies can protect themselves
As there is no reliable detection technology available yet, deepfakes are here to stay for now. As with computer viruses and other cyberattacks, companies will have to learn how to defend themselves.
The defense starts with awareness. Employees need to be educated about the dangers of deepfakes, how to spot them and how to report them (Symantec, 2019). Deepfake detection should become an integral part of employee onboarding and corporate security training.

Until the technology to detect deepfakes matures, companies should be aware that deepfakes can pose a real threat.

 

References

CNN Business (2017). Why ransomware costs small business big money. Retrieved from: https://money.cnn.com/2017/07/27/technology/business/ransomware-malwarebytes/index.html

Facebook (2019). Creating a data set and a challenge for deepfakes. Retrieved from: https://ai.facebook.com/blog/deepfake-detection-challenge/

Google (2019). Contributing Data to Deepfake Detection Research. Retrieved from: https://ai.googleblog.com/2019/09/contributing-data-to-deepfake-detection.html

Symantec (2019). Here’s How Deepfakes Can Harm Your Enterprise — and What to Do About Them. Retrieved from: https://www.symantec.com/blogs/feature-stories/heres-how-deepfakes-can-harm-your-enterprise-and-what-do-about-them

The Verge (2018). Why we need a better definition of ‘deepfake’. Retrieved from: https://www.theverge.com/2018/5/22/17380306/deepfake-definition-ai-manipulation-fake-news

Threatpost (2019). CEO ‘Deep Fake’ Swindles Company Out of $243K. Retrieved from: https://threatpost.com/deep-fake-of-ceos-voice-swindles-company-out-of-243k/147982/

VadeSecure (2015). The corporate impact of phishing. Retrieved from: https://www.vadesecure.com/en/the-corporate-impact-of-phishing/

Wired (2019). Even the AI Behind Deepfakes Can’t Save Us From Being Duped. Retrieved from: https://www.wired.com/story/ai-deepfakes-cant-save-us-duped/


‘Hey Google, are you listening to me?’

9 September 2019


 

Last week I had to call Coolblue’s customer service to make a return. Upon dialling the customer support number, I heard a friendly voice saying:

‘This call may be recorded for quality and training purposes’.
Well, of course I am happy to help new employees and improve customer service, so I consent by staying on the line. Sounds easy, right?

In the age of digital assistants and the Internet of Things, things are not always as clear as in the example above: we do not always know who is listening to us, or when.
The AI-powered virtual assistants of Google, Amazon, Apple and Microsoft were introduced to make life easier. Want to know today’s weather? Just ask. Is there going to be traffic on your way to work? Ask Google. To make things even better, the devices are built with machine-learning capabilities. The error rate of facial and speech recognition in machine-learning products is now even lower than that of actual human beings (Brynjolfsson and McAfee, 2017). Smart assistants can learn your daily routine, play your favourite songs and call your grandmother, but at what cost?

Recently, the Belgian broadcaster VRT obtained over 1,000 audio files from a Google contractor who had been hired to review audio captured by the Google Assistant from devices including smart speakers, phones and security cameras. After listening to an audio file in which a couple was talking with their son and baby grandchild, Tim Verheyden, a journalist at VRT, was able to locate the couple. To his surprise, the Google Assistant did not only record conversations after being triggered with ‘Hey Google’, but also at random. The smart devices recorded sensitive medical information, physical violence and even sexual intercourse (Wired, 2019).

Privacy-by-design
These developments raise the question of how companies can clear up the privacy cloud currently hanging over digital assistants. For a start, the companies producing digital assistants should engineer their products with privacy by design. For example, privacy-minded Apple retains voice queries but decouples them from your name or user ID, tagging them with a random string of numbers unique to each user. Then, after six months, even the connection between the utterance and the numerical identifier is eliminated (The Guardian, 2019).
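A minimal sketch of that pseudonymization-plus-retention idea is shown below. It is an illustrative Python toy under stated assumptions: the in-memory store, function names and the 183-day window stand in for whatever Apple actually runs, which is not public.

```python
# Hypothetical sketch of pseudonymization plus retention: voice queries
# are stored under a random identifier with no link to the user's name,
# and even the identifier-to-audio mapping is purged after a retention
# window (roughly six months in Apple's reported scheme). All names and
# the in-memory "store" are illustrative assumptions.
import uuid
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # roughly six months

# identifier -> (timestamp, audio payload); no user ID is ever stored
query_store: dict[str, tuple[datetime, bytes]] = {}

def record_query(audio: bytes) -> str:
    """Store a voice query under a fresh random pseudonym."""
    pseudonym = str(uuid.uuid4())
    query_store[pseudonym] = (datetime.now(timezone.utc), audio)
    return pseudonym

def purge_expired() -> None:
    """Drop entries older than the retention window, severing even the
    link between the utterance and its random identifier."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    for key in [k for k, (ts, _) in query_store.items() if ts < cutoff]:
        del query_store[key]
```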

The ethics around digital assistants remain a topic of discussion. How much of your privacy are you willing to give up to improve AI in digital assistants?

References
Brynjolfsson, E., and McAfee, A. (2017). The business of artificial intelligence: what it can and cannot do for your organization. Harvard Business Review.
The Guardian (2019). Smart talking: are our devices threatening our privacy? Retrieved from: https://www.theguardian.com/technology/2019/mar/26/smart-talking-are-our-devices-threatening-our-privacy
Wired (2019). Who’s Listening When You Talk To Your Google Assistant? Retrieved from: https://www.wired.com/story/whos-listening-talk-google-assistant/
