Neural Networks: Invented Over Half a Century Ago but Unused for Decades

6 October 2020


Most of us are aware that artificial intelligence (AI) has been developing rapidly in recent years. However, not many people realize that the neural networks responsible for the vast majority of today's AI improvements date back to the previous century. This raises the question: why was there hardly any progress in the domain of AI in the 20th century, if the most powerful algorithm known today had already been around for decades?


First, the performance of neural networks is highly dependent on the amount of training data, that is, the data used to teach an algorithm to perform a certain task. For instance, a dataset of labeled pictures of cats and dogs may be used to train an algorithm to distinguish cats from dogs in photos. In general, the more training data a neural network has, the better it performs. Moreover, the more data is generated, the greater the number of potential use cases, since training data is a prerequisite for any neural network or machine learning application. Nowadays, the amount of data generated every day is incomparably higher than the amount generated in the previous century: some experts estimated that, as of 2013, 90% of the world's data had been generated in 2011 and 2012 alone (SINTEF, 2013). This ubiquity of data enables wider use of the algorithms and increases their performance.
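
To make the cats-and-dogs example concrete, here is a minimal sketch in Python of what "training on labeled data" looks like, using scikit-learn. The "cat" and "dog" feature vectors below are synthetic stand-ins invented for illustration (a real system would extract features from actual photos); the sketch simply shows a model learning from labeled examples, and that having more of them tends to help.

```python
# Minimal sketch: training a small neural network on labeled data.
# The "cat"/"dog" features are synthetic stand-ins for real image features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_dataset(n_samples):
    # Two noisy, overlapping feature clusters standing in for cat and dog photos.
    cats = rng.normal(loc=-1.0, scale=2.0, size=(n_samples // 2, 8))
    dogs = rng.normal(loc=+1.0, scale=2.0, size=(n_samples // 2, 8))
    X = np.vstack([cats, dogs])
    y = np.array([0] * (n_samples // 2) + [1] * (n_samples // 2))  # 0 = cat, 1 = dog
    return X, y

# More training data generally means better performance.
for n in (50, 500, 5000):
    X, y = make_dataset(n)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{n:5d} labeled examples -> test accuracy {acc:.2f}")
```

On a run like this, test accuracy typically climbs as the number of labeled examples grows, which is exactly the data dependence described above.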


Second, leveraging large amounts of existing data requires sufficient computing power. Luckily for AI development, the performance of our computers has increased exponentially, while the cost of computing has drastically decreased. In 1980, one would have had to pay about $1 million for a device with computing power equivalent to that of an iPad 2; in 1950, the required amount would have been roughly $1 trillion (Greenstone and Looney, 2011). The ubiquity of powerful computing devices enabled the wide-scale utilization of available data, further contributing to AI development.
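
As a rough back-of-the-envelope check on those figures, the implied rate of decline can be worked out in a few lines of Python. The two cost figures come from the source cited above; the derived per-year rate is my own arithmetic, not a number from the report.

```python
# Back-of-the-envelope: implied rate of decline in computing cost,
# from the two data points cited above (Greenstone and Looney, 2011).
cost_1950 = 1e12  # ~$1 trillion for iPad-2-level computing power in 1950
cost_1980 = 1e6   # ~$1 million for the same computing power in 1980
years = 1980 - 1950

# Average yearly factor r such that cost_1950 * r**years == cost_1980.
r = (cost_1980 / cost_1950) ** (1 / years)
print(f"Cost fell by a factor of {cost_1950 / cost_1980:.0e} in {years} years,")
print(f"i.e. computing became roughly {(1 - r) * 100:.0f}% cheaper every year.")
```

A sustained drop of roughly 37% per year is what it takes to turn a $1 trillion machine into a $1 million one within three decades.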


As the amount of data generated every day keeps increasing and our devices become ever more powerful, we can expect even more AI applications in the future. Several decades passed before neural networks showed how powerful they are. Perhaps the next powerful technology has already been developed and will only prove its value decades from now, as was the case with neural networks.


References:

Cs.stanford.edu (2000). ‘Neural Networks – History’. Online. Accessed 06.10.2020 via: https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/History/history1.html


Greenstone, M. and Looney, A. (2011). ‘A Dozen Economic Facts About Innovation’. Online. Accessed 06.10.2020 via: https://www.brookings.edu/wp-content/uploads/2016/06/08_innovation_greenstone_looney.pdf


SINTEF (2013). ‘Big Data, for better or worse: 90% of world’s data generated over last two years’. ScienceDaily. Online. Accessed 06.10.2020 via: www.sciencedaily.com/releases/2013/05/130522085217.htm


Featured image source:

https://www.kdnuggets.com/2019/11/designing-neural-networks.html


1 thought on “Neural Networks: Invented Over Half a Century Ago but Unused for Decades”

  1. Hi Jan,

    You raise an interesting question at the end that I would like to respond to. Like neural networks, neuromorphic architecture is not a new concept either: the term was coined in the late 1980s. These architectures synergize with neural networks because the structure of the algorithms fits the structure of the hardware. I am personally convinced that the advent of neuromorphic systems will vastly increase the number and the impact of the use cases that neural networks have.

    (1) Closely connecting processing and memory (at the edge) allows for application in a variety of previously unsuitable use cases, which means that we could gather more (and more granular) data. This could mean that, in 10–15 years’ time, deep learning will happen within, e.g., your cell phone.
    (2) With this increased computing power, training data (if present in sufficient volume) given as input to neural networks could label itself, which would in turn lower the threshold for enterprise adoption, since a tedious, human-labour aspect of training AI algorithms is taken away. This should also vastly exceed the current efficiency of neural network processing.
    (3) Connecting these systems to traditional architectures will give rise to heterogeneous systems, possibly splitting tasks into (a) parts that require traditional computation and (b) parts that require ‘associative’ computation (as in neural networks), once again improving overall efficiency and broadening the range of possible use cases.

    So I do believe that the next big technology in AI is already among us, but I don’t believe that it will replace neural networks. Instead, it will synergize with them.

    If you are interested in this topic and would like to read my sources, I kindly refer you to my own blog post: https://digitalstrategy.rsm.nl//2020/09/29/biomimicry-from-neural-networks-to-neural-architecture/

    Thanks for writing on an interesting topic!
