Neural Networks: Invented Over Half a Century Ago but Unused for Decades

6 October 2020


Most of us are aware that artificial intelligence (AI) has been developing rapidly in recent years. However, not many people realize that the neural networks responsible for the vast majority of today's AI improvements date back to the previous century. This raises a question: why was there hardly any progress in AI in the 20th century, if the most powerful algorithm known today had already been around for years?


First, the performance of neural networks is highly dependent on the amount of training data, i.e. the data used to teach an algorithm to perform a certain task. For instance, a dataset of labeled pictures of cats and dogs may be used to train an algorithm to distinguish cats from dogs in photos. In general, the more training data a neural network has, the better it performs. Moreover, the more data is generated, the higher the number of potential use cases, since training data is a prerequisite for any neural network or machine learning application. Nowadays, the amount of data generated every day is incomparably greater than the amount generated in the previous century. Some experts estimated that, as of 2013, 90% of the world's data had been generated in 2011 and 2012 alone (SINTEF, 2013). This ubiquity of data enables wider use of the algorithms and increases their performance.
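
To make the cats-and-dogs example concrete, here is a minimal training sketch, assuming PyTorch. The random tensors are a placeholder for a real labeled photo dataset, and the image size, model architecture, and label encoding are illustrative choices rather than a prescribed setup:

```python
# A minimal sketch, assuming PyTorch. The random tensors below are a
# placeholder for a real labeled photo dataset, and the image size,
# model architecture, and label encoding are illustrative choices.
import torch
import torch.nn as nn

# Placeholder "training data": 64 fake 3x32x32 images, labels 0 = cat,
# 1 = dog. In practice these would be real labeled photos.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))

# A tiny convolutional classifier.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),            # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),  # two output classes: cat, dog
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training loop: what this learns is bounded by the amount and
# quality of the labeled data that flows through it.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

With real photos in place of the random tensors, this same loop is where "more data" translates directly into better performance.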


Second, to leverage large amounts of existing data, sufficient computing power is needed. Luckily for AI development, the performance of our computers has increased exponentially, while the cost of computing has drastically decreased. In 1980, one would have had to pay $1M for a device with computing power equivalent to that of an iPad 2; in 1950, the required amount would have reached $1T (trillion) (Greenstone and Looney, 2011). The ubiquity of powerful computing devices enabled wide-scale utilization of the available data, contributing to AI development.
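
A quick back-of-the-envelope calculation shows what those two figures imply about the pace of change (the dollar amounts come from the source above; treating the decline as a smooth annual rate is a simplification for illustration):

```python
# Back-of-the-envelope, using the Greenstone and Looney (2011) figures
# above; the smooth annual rate is a simplification for illustration.
cost_1950 = 1e12  # ~$1T for iPad-2-equivalent computing power in 1950
cost_1980 = 1e6   # ~$1M for the same computing power in 1980

years = 1980 - 1950
annual_factor = (cost_1950 / cost_1980) ** (1 / years)
print(f"Cost fell ~{annual_factor:.2f}x per year, i.e. about "
      f"{100 * (1 - 1 / annual_factor):.0f}% cheaper every year.")
# -> Cost fell ~1.58x per year, i.e. about 37% cheaper every year.
```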


As the amount of data generated every day keeps increasing and our devices become ever more powerful, we can expect even more AI applications in the future. Several decades passed before neural networks showed how powerful they are. Perhaps the next powerful technology has already been developed and will only prove its value a few decades from now, as was the case with neural networks.


References:

Cs.stanford.edu (2000). ‘Neural Networks – History’. Online. Accessed 06.10.2020 via: https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/History/history1.html


Greenstone, M. and Looney, A. (2011). ‘A Dozen Economic Facts About Innovation’. Online. Accessed 06.10.2020 via: https://www.brookings.edu/wp-content/uploads/2016/06/08_innovation_greenstone_looney.pdf


SINTEF (2013). ‘Big Data, for better or worse: 90% of world’s data generated over last two years’. ScienceDaily. Online. Accessed 06.10.2020 via: https://www.sciencedaily.com/releases/2013/05/130522085217.htm


Featured image source: https://www.kdnuggets.com/2019/11/designing-neural-networks.html


How Can a Machine Learning Innovation Invented Over a Beer Make Us Not Know What Is Real?

13 September 2020


How did it all start?

In 2014, Ian Goodfellow went with his friends to a bar in Montreal to discuss the idea of software that would be able to create photos by itself. Some researchers had already tried to use neural networks for this task, but the results were of poor quality. Goodfellow's friends hoped that complex statistical analysis would solve the problem; he disagreed. Instead, he came up with the idea of pitting two neural networks against each other: one, the generator, would generate images, while the other, called the discriminator, would be responsible for distinguishing real pictures from the generator's outputs. Once Goodfellow got home that same evening, he developed a first model, and it worked on the first try. This is how generative adversarial networks (GANs) were born.
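
Here is a minimal sketch of that generator-versus-discriminator loop in PyTorch (not Goodfellow's original code). To stay self-contained it learns a toy one-dimensional distribution instead of images, and the network sizes and learning rates are arbitrary, but the adversarial structure is the same:

```python
# A minimal sketch of the adversarial setup, not Goodfellow's original
# code. To stay self-contained it learns a toy 1-D Gaussian (mean 4)
# instead of images; network sizes and learning rates are arbitrary.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(32, 1) + 4.0       # samples from the target
    fake = generator(torch.randn(32, 8))  # noise -> fake samples

    # Discriminator step: label real samples 1, generator outputs 0.
    d_opt.zero_grad()
    d_loss = (bce(discriminator(real), torch.ones(32, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator output 1 ("real").
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()

print("mean of generated samples:",
      generator(torch.randn(256, 8)).mean().item())
```

The generator never sees the real data directly; it improves only because fooling the discriminator is the sole way to reduce its loss.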


[Screenshot: examples of GAN-generated images over the years, illustrating how quickly output quality improved]

What is real?

Today, GANs have multiple applications, ranging from face generation to cartoon-character generation to text-to-image translation. As the picture above shows, the technology has advanced remarkably fast, and nowadays generated images are often indistinguishable from real ones. If you want to test yourself, try to classify the images below. The answer is included below the references.

[Screenshot: a set of images to classify as real or GAN-generated]


GAN use cases are not limited to generating pictures. In fact, many of the videos on the internet known as deepfakes are generated with the use of GANs. In such videos, the face of one individual is swapped with the face of another person, and the result looks so real that it takes some time to realize you are watching a computer-generated video. The quality of deepfakes has improved in recent years, and the trend is likely to continue. The technology has been used to generate videos such as the ones linked in the references below.


Why should we care?

Although it's entertaining to see Will Smith as Neo, or Hitler and Stalin singing ‘Radio Star’, deepfakes can also pose a serious threat. For instance, the technology has been used to generate fake news that may be almost impossible to distinguish from real news, especially for a person who is not aware that such technology exists. Another threat is the impact deepfakes have on our trust in the authenticity of video. Nowadays, videos are generally accepted as evidence in courts. However, it is theoretically possible for someone to swap your face onto a criminal's in a video and try to convince others that you committed the crime. There is also a risk that real criminals who were captured on camera while committing a crime won't be convicted, because video would no longer be considered reliable evidence.


Researchers are coming up with new methods of detecting deepfakes, for instance by analysing eye blinks or heartbeat signals in generated videos. Yet deepfakes are continuously getting better, and hence more difficult to detect. It remains to be seen whether we will keep trusting the videos we see or come to doubt their authenticity. I encourage you to share your thoughts in the comments on other malicious uses of this technology and possible ways to mitigate its negative effects.
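
To make the blink-analysis idea concrete, here is a toy sketch, not a real detector. It assumes per-frame "eye openness" scores have already been extracted by some face-landmark tool (that preprocessing is omitted), and both the closed-eye threshold and the expected blink rate are illustrative guesses, not validated forensic parameters:

```python
# A toy illustration of the blink heuristic, not a real detector. It
# assumes per-frame "eye openness" scores have already been extracted
# by some face-landmark tool (omitted here); the threshold and the
# expected blink rate are illustrative guesses, not forensic values.

def count_blinks(openness, closed_threshold=0.2):
    """Count open-to-closed transitions in a sequence of openness scores."""
    blinks, closed = 0, False
    for score in openness:
        if score < closed_threshold and not closed:
            blinks += 1
            closed = True
        elif score >= closed_threshold:
            closed = False
    return blinks

def looks_suspicious(openness, fps=30, min_blinks_per_min=5):
    """Flag clips whose subject blinks far less often than people normally do."""
    minutes = len(openness) / fps / 60
    rate = count_blinks(openness) / max(minutes, 1e-9)
    return rate < min_blinks_per_min

# Hypothetical 60-second clip in which the subject never blinks:
no_blinks = [0.9] * (30 * 60)
print(looks_suspicious(no_blinks))  # True -> worth a closer look
```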


References:

Giles, M. (2018, February 21). The GANfather: The man who’s given machines the gift of imagination. Retrieved September 12, 2020, from https://www.technologyreview.com/2018/02/21/145289/the-ganfather-the-man-whos-given-machines-the-gift-of-imagination/

Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., . . . Amodei, D. (2018, February 20). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Retrieved September 12, 2020, from https://arxiv.org/abs/1802.07228

Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2018, February 26). Progressive Growing of GANs for Improved Quality, Stability, and Variation. Retrieved September 12, 2020, from https://arxiv.org/abs/1710.10196

Jin, Y., Zhang, J., Li, M., Tian, Y., Zhu, H., & Fang, Z. (2017, August 18). Towards the Automatic Anime Characters Creation with Generative Adversarial Networks. Retrieved September 12, 2020, from https://arxiv.org/abs/1708.05509

Zhang, H., Xu, T., Li, H., Zhang, S., Wang, X., Huang, X., & Metaxas, D. (2017, August 05). StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks. Retrieved September 12, 2020, from https://arxiv.org/abs/1612.03242

Albahar, M., & Almalki, J. (n.d.). Deepfakes: Threats and countermeasures systematic review. Semantic Scholar. Retrieved September 12, 2020, from https://www.semanticscholar.org/paper/DEEPFAKES:-THREATS-AND-COUNTERMEASURES-SYSTEMATIC-Albahar-Almalki/cd1cbbe9b7e5cb47c9f3aaf1b475d4694d9b2492

YouTube. Retrieved September 12, 2020, from https://www.youtube.com/watch?v=1h-yy3h1u04

YouTube. Retrieved September 12, 2020, from https://www.youtube.com/watch?v=25GjijODWoI


Answer: all of the pictures above were generated by GANs.

