In the new Star Wars series The Mandalorian, there is an iconic scene in which Mark Hamill appears as a young man of about 30, even though he is 70 years old. Disney used digital techniques to de-age Mark Hamill so he would look as he did in the first Star Wars films some 40 years ago. Now, what does this have to do with deepfakes? Lucasfilm used an entire team of VFX artists to de-age Mark's character, and the result was a waxwork-looking version of him. In just four days, one person using deepfake technology was able to create a better-looking scene and was consequently hired by Lucasfilm. In the video below you can see for yourself the power of this technology.
As you just saw, deepfake technology gives filmmakers tremendous opportunities to do what was once reserved for large film studios. It gives filmmakers around the world the possibility of creating high-end visual effects without large-studio technology or backing.
However, there are downsides to this technology as well. While it is a golden gift for the film industry, malicious parties could use it to slander and discredit public figures, and because deepfakes are built on machine learning, they keep improving and will eventually become indistinguishable from reality. The following video is from 2018, and since then the quality of deepfakes has only improved.
Now, what can we do to battle this ever-increasing threat of mis- and disinformation? In 2019, a paper called FaceForensics++ was released, in which the researchers used machine learning to build a deepfake detector, training it on a dataset of a thousand videos manipulated with four different generation methods. Trained this way, the detector could recognise several kinds of deepfakes, and on high-resolution video it reached an accuracy of over 99%; on low-resolution video, however, accuracy dropped to 51.80% (Rössler et al., 2019). This reveals a weakness of the technology, as online videos are compressed and re-encoded many times. Additionally, detection software gives deepfake creators feedback on exactly where their fakes were caught, which they can use to improve their generators: a true arms race between deepfake generation and detection, in which each side's improvements drive the other's. The only way to protect ourselves against such technology is to proactively develop deepfakes and their detectors, in order to stay ahead of those who would use them to do harm.
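To give an intuition for why accuracy collapses on low-resolution video, here is a toy numpy sketch (not the FaceForensics++ method; the synthetic frame, the radial frequency cutoff, and the box blur standing in for lossy compression are all illustrative assumptions). Many detectors key on subtle high-frequency artifacts left by face-swapping; compression removes exactly that detail:

```python
import numpy as np

def high_freq_energy(img):
    """Fraction of the image's spectral energy above a radial
    frequency cutoff; a crude proxy for the fine artifacts
    that detectors rely on."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    return spec[r > min(h, w) / 4].sum() / spec.sum()

def box_blur(img, k=3):
    """Illustrative stand-in for lossy compression: a k-by-k box
    blur (with wrap-around edges) that wipes out fine detail."""
    out = np.zeros_like(img)
    for dy in range(-(k // 2), k // 2 + 1):
        for dx in range(-(k // 2), k // 2 + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / k ** 2

rng = np.random.default_rng(0)
# Synthetic 64x64 "frame": smooth content plus high-frequency noise,
# mimicking the subtle blending artifacts a deepfake can leave behind.
base = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
frame = base + 0.2 * rng.standard_normal((64, 64))

before = high_freq_energy(frame)
after = high_freq_energy(box_blur(frame))
print(f"high-freq energy fraction: {before:.3f} -> {after:.3f} after 'compression'")
```

The high-frequency energy fraction drops sharply after the blur: the telltale signal a detector would learn from is simply no longer in the pixels, which is consistent with the paper's low-resolution numbers hovering near chance.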
Reference: Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J. and Nießner, M., 2019. FaceForensics++: Learning to Detect Manipulated Facial Images. arXiv preprint.
Very interesting post Joost! The harm that deepfake technology can do should not be underestimated. It is indeed another factor in causing confusion about what is valid information and what is not. I like the idea of investing in continuously improving deepfakes in order to stay ahead of 'the enemy'. In my opinion, this will be more effective than sitting back and only trying to follow the improvements by developing detection technology. However, this raises the question of whether we should be willing to help those who would use deepfakes to do harm become better at generating them, as they might not have reached the same advanced level if they had had to develop everything by themselves.