How did it all start?
In 2014, Ian Goodfellow went with his friends to a bar in Montreal to discuss the idea of software that could create photos on its own. Some researchers had already tried to use neural networks for this task, but the results were of poor quality. Goodfellow’s friends hoped that complex statistical analysis would solve the problem; he disagreed. Instead, he came up with the idea of pitting two neural networks against each other. One of them, the generator, would generate images, while the other, called the discriminator, would be responsible for distinguishing real pictures from the generator’s outputs. When Goodfellow got home that same evening, he built the first model, and it worked on the first try. This is how generative adversarial networks (GANs) were born.
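The core of the idea fits in a few lines of code. Below is a minimal sketch of the adversarial training loop in PyTorch; the tiny fully connected networks, the latent dimension, and the optimizer settings are illustrative assumptions on my part, not Goodfellow’s original setup. The discriminator is trained to tell real images from generated ones, while the generator is trained to make the discriminator believe its outputs are real.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # noise size and flattened 28x28 image size (assumed)

# Generator: maps random noise to a fake image
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that an image is real
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_imgs):
    batch = real_imgs.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones
    noise = torch.randn(batch, latent_dim)
    fake_imgs = G(noise).detach()  # detach so this step does not update G
    loss_D = bce(D(real_imgs), real_labels) + bce(D(fake_imgs), fake_labels)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # 2) Train the generator to fool the discriminator
    noise = torch.randn(batch, latent_dim)
    loss_G = bce(D(G(noise)), real_labels)  # generator wants D to output "real"
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```

Calling `train_step` on batches of real images (for example from a data loader) alternates the two updates; over many iterations the generator’s samples become harder and harder for the discriminator to reject.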
What is real?
Today GANs have many applications, ranging from face generation to cartoon-character generation to text-to-image translation. As the picture above shows, the technology has advanced rapidly, and generated images are now indistinguishable from real ones. If you want to test yourself, try to classify the images below. The answers are given below the references.
GAN use cases are not limited to generating pictures. In fact, many videos on the internet, known as deepfakes, are generated with the help of GANs. In such videos, the face of one individual is swapped with the face of another person. The videos look so real that it takes some time to realize you are watching computer-generated footage. The quality of deepfakes has improved in recent years, and the trend is likely to continue. The technology has been used to generate videos such as the ones below.
Why should we care?
Although it’s entertaining to see Will Smith as Neo, or Hitler and Stalin singing Radio Star, deepfakes can also pose a serious threat. For instance, the technology has been used to generate fake news that may be almost impossible to distinguish from real news, especially for a person who is not aware that such technology exists. Another threat is the impact deepfakes have on our belief in the authenticity of video. Videos are generally accepted as evidence in courts today, yet it is theoretically possible for someone to swap your face into a video of a crime and try to convince others that you committed it. There is also a risk that real criminals who were captured on camera while committing a crime won’t be convicted, because video would no longer be considered reliable evidence.
Researchers are developing new methods of detecting deepfakes, for instance by analysing eye blinks or heart rate in generated videos, yet deepfakes keep getting better and therefore harder to detect. It remains to be seen whether we will keep trusting the videos we see or start doubting their authenticity. I encourage you to share your thoughts in the comments on other malicious uses of this technology and possible ways to mitigate its negative effects.
References:
Giles, M. (2018, February 21). The GANfather: The man who’s given machines the gift of imagination. MIT Technology Review. Retrieved September 12, 2020, from https://www.technologyreview.com/2018/02/21/145289/the-ganfather-the-man-whos-given-machines-the-gift-of-imagination/
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., . . . Amodei, D. (2018, February 20). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Retrieved September 12, 2020, from https://arxiv.org/abs/1802.07228
Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2018, February 26). Progressive Growing of GANs for Improved Quality, Stability, and Variation. Retrieved September 12, 2020, from https://arxiv.org/abs/1710.10196
Jin, Y., Zhang, J., Li, M., Tian, Y., Zhu, H., & Fang, Z. (2017, August 18). Towards the Automatic Anime Characters Creation with Generative Adversarial Networks. Retrieved September 12, 2020, from https://arxiv.org/abs/1708.05509
Zhang, H., Xu, T., Li, H., Zhang, S., Wang, X., Huang, X., & Metaxas, D. (2017, August 05). StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks. Retrieved September 12, 2020, from https://arxiv.org/abs/1612.03242
Albahar, M., & Almalki, J. (2019). Deepfakes: Threats and countermeasures systematic review. Retrieved September 12, 2020, from https://www.semanticscholar.org/paper/DEEPFAKES:-THREATS-AND-COUNTERMEASURES-SYSTEMATIC-Albahar-Almalki/cd1cbbe9b7e5cb47c9f3aaf1b475d4694d9b2492
Retrieved September 12, 2020, from https://www.youtube.com/watch?v=1h-yy3h1u04
Retrieved September 12, 2020, from https://www.youtube.com/watch?v=25GjijODWoI
Answer: all pictures were generated by GANs