Biomimicry: From Neural Networks to Neural Architecture

29 September 2020


Biomimicry is not new; the philosophy is that nature has already solved some of the puzzling problems we humans face today. Just looking at a few examples—bird beaks as inspiration for trains, gecko toes for adhesives, whales for wind turbines, and spiders for protective glass (Interesting Engineering, 2018)—leads us to conclude that nature can indeed help us solve difficult problems. But what about our own nature: what problems could human biology inspire us to solve?

Well, there is one problem that we are facing in the field of computer science, which I am fairly sure you have heard of: the nearing ‘death’ of Moore’s law. Gordon E. Moore predicted, in 1965, that “the number of transistors per silicon chip doubles every year” (Britannica, n.d.). However, we are nearing the limits of physics when it comes to scaling down transistors and packing them closer together on a traditional computer chip (Technology Review, 2020); going any denser would cause overheating issues.

There are plenty of problems that require vastly more computational power than we possess today, for example in the fields of physics, chemistry and biology, but also in more socio-technical contexts. Our traditional computer chips, built on the Von Neumann architecture, do not pack the power to solve these problems; they even struggle with tasks such as image and audio processing. Perhaps the biggest flaw is the infamous ‘Von Neumann bottleneck’.

This, amongst other reasons, has been inspiring researchers to pursue a different type of architecture: one that is more energy efficient, packs more processing power, and gets rid of the bottleneck between processing and memory retrieval (more on that below). One promising field of research is that of neuromorphic architecture: a design that mimics the architecture of the human brain.

Traditional chips

Von Neumann architectures – what your laptop and mobile phone are built on, amongst other things – have computer chips with master clocks that, at each tick, evaluate a binary input and pass on a binary output through the logic gates formed by the chip’s transistors. Each transistor can only be on or off, and the processor often stands idle while waiting to fetch information from the separate memory, which makes these chips very energy inefficient and gives rise to the ‘Von Neumann bottleneck’. The latter comes down to the problem that, no matter how much processing power grows, if ‘transfer rates’ (the rates at which memory is retrieved) stay the same, latency will not improve (All About Circuits, 2020). This clock-driven, binary architecture stands in stark contrast to the architecture of neuromorphic chips.
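To see why a faster processor alone does not help, here is a back-of-the-envelope sketch in Python. The numbers are invented purely for illustration and do not describe any real chip: even a hundredfold faster processor barely improves the total time, because the fixed memory transfer rate sets the floor.

    # Toy illustration of the Von Neumann bottleneck (invented numbers, not real hardware specs).
    def total_time(ops, bytes_moved, ops_per_second, bytes_per_second):
        """Time spent computing plus time spent waiting on memory."""
        compute_time = ops / ops_per_second
        memory_time = bytes_moved / bytes_per_second
        return compute_time + memory_time

    ops = 1e9          # a billion arithmetic operations
    bytes_moved = 8e9  # eight gigabytes shuttled between processor and memory

    for speedup in (1, 10, 100):
        t = total_time(ops, bytes_moved, ops_per_second=1e9 * speedup, bytes_per_second=1e10)
        print(f"{speedup:>3}x faster processor -> {t:.2f} s in total")
    # Output: 1.80 s, 0.90 s, 0.81 s: the memory traffic, not the processor, sets the floor.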

Neuromorphic chips

Neuromorphic chips contain a spiking neural network, whose artificial neurons are only activated when incoming signals reach an activation threshold, remaining at a low-power baseline otherwise. These signals, which are electric pulses, are fired when sensory input changes. What a signal means depends on the number of spikes within a certain period of time, as well as on the design of that specific chip. These signals are graded rather than binary, which means that they – via weighted values – can carry more information per signal than a bit can. Overall, the design lends itself excellently to processing sensor data, including speech, image and radar inputs. Such sensory inputs are currently processed by neural networks, but on traditional architecture. The hardware of neuromorphic chips, as the name may give away, resembles a neural network itself, which adds the benefit of running AI models at vastly higher speeds than CPUs and GPUs can (IEEE, 2017).
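To make the threshold-and-spike idea concrete, below is a minimal sketch of a single ‘leaky integrate-and-fire’ neuron in Python. It only illustrates the principle described above; the parameter values are arbitrary and it is not modelled on any particular neuromorphic chip.

    # Minimal leaky integrate-and-fire neuron: it accumulates weighted input, leaks a
    # little charge each time step, and emits a spike only when a threshold is crossed.
    def simulate(inputs, weight=0.6, leak=0.9, threshold=1.0):
        potential = 0.0
        spike_times = []
        for t, x in enumerate(inputs):
            potential = potential * leak + weight * x  # integrate input, leak charge
            if potential >= threshold:                 # fire only past the threshold
                spike_times.append(t)
                potential = 0.0                        # reset after spiking
        return spike_times

    # A changing input produces spikes; a quiet input produces none,
    # which is where the energy savings come from.
    print(simulate([0, 1, 1, 0, 0, 1, 1, 1, 0, 0]))  # [2, 6]
    print(simulate([0] * 10))                         # []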

The artificial neurons of a neuromorphic chip, organised in synaptic cores, operate in parallel. This means that multiple neurons can be activated, and can activate other neurons, at the same time (Psychology Today, 2019). This makes neuromorphic chips incredibly scalable—since you can increase the number of artificial neurons—as well as fault-tolerant, since signals can find other synaptic routes (via other neurons) when a neuron breaks. This mimics the neuroplasticity of the human brain.
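The rerouting idea can be sketched with a toy graph of neurons; the topology below is invented purely for illustration. If one neuron on a path from input to output dies, a spike can still reach the output along a redundant connection.

    # Toy illustration of fault tolerance through redundant synaptic routes.
    # Neurons are nodes, synapses are directed edges; the topology is invented.
    connections = {
        "input": {"A", "B"},  # the input neuron excites two parallel neurons
        "A": {"output"},
        "B": {"output"},
        "output": set(),
    }

    def spike_reaches_output(connections, dead=frozenset()):
        """Propagate a spike through the network, skipping dead neurons."""
        frontier, visited = ["input"], set()
        while frontier:
            neuron = frontier.pop()
            if neuron in dead or neuron in visited:
                continue
            visited.add(neuron)
            frontier.extend(connections[neuron])
        return "output" in visited

    print(spike_reaches_output(connections))                    # True: healthy network
    print(spike_reaches_output(connections, dead={"A"}))        # True: rerouted via B
    print(spike_reaches_output(connections, dead={"A", "B"}))   # False: no route left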

Another quintessential aspect of neuromorphic chips is that memory and computation are tightly coupled. Whereas traditional chips require external memory for non-volatility, this type of memory is inherent to the design of neuromorphic chips (IBM, n.d.). The artificial neurons within a chip are connected by memristors that act as artificial synapses. These memristors provide non-volatile memory, because they ‘remember’ the electric charge that has previously flowed through them, as well as the direction in which it was sent (MIT, 2020). This non-volatility means that memristors retain their information even after the device is switched off.
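A common way to picture this coupling is a memristor crossbar: each memristor’s conductance stores a synaptic weight, and applying input voltages to the rows yields output currents that are already the weighted sums a neural network needs. The sketch below only illustrates that arithmetic (Ohm’s and Kirchhoff’s laws) and is not modelled on any specific chip.

    # Sketch of in-memory multiply-accumulate on a memristor crossbar.
    # Each conductance doubles as a stored (non-volatile) synaptic weight, so the
    # weighted sums are computed where the weights live, with no separate memory fetch.
    conductances = [   # rows: input lines, columns: output lines (arbitrary values)
        [0.2, 0.8],
        [0.5, 0.1],
        [0.9, 0.4],
    ]
    voltages = [1.0, 0.0, 0.5]  # input spike pattern encoded as row voltages

    # Current on each output column j: I_j = sum_i V_i * G_ij (Ohm plus Kirchhoff)
    currents = [
        sum(v * row[j] for v, row in zip(voltages, conductances))
        for j in range(len(conductances[0]))
    ]
    print(currents)  # [0.65, 1.0], up to floating-point rounding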

Players to watch

The neuromorphic computing industry is consolidated and built upon traditional computer engineering capabilities. In my view, there are three chips to watch in the field of neuromorphic computing: BrainChip’s Akida, IBM’s TrueNorth, and Intel’s Loihi.

  • BrainChip’s Akida consists of 80 neuromorphic processing units that amount to 1.2 million neurons and 10 billion synapses (Nextplatform, 2020).
  • IBM’s TrueNorth consists of 4096 cores that amount to 1 million neurons and 256 million synapses (CACM, 2020).
  • Intel’s Pohoiki Springs integrates 768 Loihi chips and amounts to 100 million neurons (Intel, 2020).

While Intel’s Pohoiki Springs—a neuromorphic system that aggregates Loihi chips—is still in the research phase and available only to researchers, its 100 million neurons make it the most advanced neuromorphic system to date (InsideHPC, 2020). It can perform specific tasks up to 1,000 times faster and 10,000 times more efficiently than conventional processors (Intel, 2020). In terms of the number of neurons inside, it resembles the brain of a small mammal. In addition, Intel (2020) claims that the neuromorphic system is suited not only to AI purposes, but to a wide range of computationally difficult problems.

Practical considerations

Neuromorphic chips are energy efficient, run AI models more efficiently than traditional architectures, are scalable, and reduce latency by tightly coupling processing and memory. These properties make neuromorphic chips fit to run AI models at the edge rather than in the cloud, which can be valuable for applications in (amongst others) autonomous cars, industrial (IoT) environments, smart cities, cybersecurity, embedded video and audio, and optimization problems such as minimal-risk stock portfolios (Nextplatform, 2020; Intel, 2020). In addition, the energy-efficient and compact design could enable deep learning to become embedded in devices such as mobile phones. This could drastically improve natural language processing in day-to-day applications – just imagine Siri actually understanding your question and providing a helpful answer!

However, we are not there yet. There are still plenty of challenges, amongst which is developing the most efficient learning algorithms to run on neuromorphic chips. Neuromorphic chips are still in their infancy, and overcoming technical hurdles will not be the only challenge (Analytics Insight, 2020); ethical concerns surrounding biomimicking hardware already exist, and should be expected to intensify as the technology gains traction and its capabilities grow.

Neuromorphic hardware is not commercially viable yet, but that does not mean we should not pay attention to it.

In the face of all this exciting uncertainty, I will conclude with some food for thought. Please let me know in the comments what your opinion is on (one of) the following three questions:

  • Do you think neuromorphic chips possess potentially transformative power for the nature of work, or even our day-to-day life? Why?
  • What types of (business) applications do you see for hyper-efficient neural network processing at the edge?
  • Can you think of any problems that we have pushed forward along the uncertain and lengthy path of quantum computing research that may be solved earlier by neuromorphic computing?

References

All About Circuits. (2020) https://www.allaboutcircuits.com/news/ai-chip-strikes-down-von-neumann-bottleneck-in-memory-neural-network-processing/ [Accessed September 25, 2020]
Analytics Insight. (2020) https://www.analyticsinsight.net/neuromorphic-computing-promises-challenges/ [Accessed September 28, 2020]
Britannica. (n.d.) https://www.britannica.com/technology/Moores-law/ [Accessed September 25, 2020]
CACM. (2020) https://cacm.acm.org/magazines/2020/8/246356-neuromorphic-chips-take-shape/fulltext [Accessed September 28, 2020]
IBM. (n.d.) https://www.zurich.ibm.com/sto/memory/ [Accessed September 26, 2020]
IEEE. (2017) https://spectrum.ieee.org/semiconductors/design/neuromorphic-chips-are-destined-for-deep-learningor-obscurity [Accessed September 26, 2020]
InsideHPC. (2020) https://insidehpc.com/2020/03/intel-scales-neuromorphic-system-to-100-million-neurons/ [Accessed September 28, 2020]
Intel. (2020) https://newsroom.intel.com/news/intel-scales-neuromorphic-research-system-100-million-neurons/ [Accessed September 28, 2020]
Interesting Engineering. (2018) https://interestingengineering.com/biomimicry-9-ways-engineers-have-been-inspired-by-nature [Accessed September 29, 2020]
MIT. (2020) https://news.mit.edu/2020/thousands-artificial-brain-synapses-single-chip-0608 [Accessed September 26, 2020]
Nextplatform. (2020) https://www.nextplatform.com/2020/01/30/neuromorphic-chip-maker-takes-aim-at-the-edge/ [Accessed September 28, 2020]
Psychology Today. (2019) https://www.psychologytoday.com/us/blog/the-future-brain/201902/neuromorphic-computing-breakthrough-may-disrupt-ai [Accessed September 26, 2020]
Technology Review. (2020) https://www.technologyreview.com/2020/02/24/905789/were-not-prepared-for-the-end-of-moores-law/ [Accessed September 25, 2020]

2 thoughts on “Biomimicry: From Neural Networks to Neural Architecture”

  1. Hi Wesley,

    I found your comment on my post very thought-provoking, and I decided to read more about neuromorphic chips in your blog post. Before I answer your question, I would like to note that I appreciate your down-to-earth approach. You are aware of the limitations of Moore’s law and you don’t extrapolate it blindly into the future, as most people nowadays do. Also, it seems that you are not overly optimistic about quantum computers, which is rare in today’s world, where almost everyone seems to follow the fad.

    I would like to share my opinion regarding one of the questions that you asked, namely: what types of (business) applications do you see for hyper-efficient neural network processing at the edge?

    The first things that come to my mind are applications that process sensitive or private data. I would strongly prefer to have my health data processed and stored on my device rather than in a centralized cloud; this would increase my privacy and the security of my data. Also, I would trust Amazon’s Alexa and similar assistants more if they were processing my speech data at the edge rather than in the cloud. I think that edge processing enabled by neuromorphic hardware will create many opportunities for companies to build applications that need to take the privacy and security of users’ data into account.

    Your blog post was a very good read!

  2. Hello Wesley. It has been a lot of fun, time-consuming, and even a challenge to read up (partly obsessively) on the topic. Your article, which I read four days ago, introduced me to the topic, and this comment builds on a lot of research in the days since. Your article seems to check out against whatever sources I could find, so instead of commenting on fallacies, I want to dive deeper into the subject. In this comment I’ll take you through some thoughts that made things clear to me and what I’m still fuzzy about. Maybe you’ll learn something, maybe I will. Either way, you’ll probably detect my omnipresent errors.

    To start, I understand neuromorphic computing is about designing hardware like brains, for neural networks; meaning we make the transition from simulation to emulation. There are three main components: neuron, synapse and network, all of which need a model. In the literature there is a difference between analog and digital implementations of a network model. This difference means that a network model, i.e. one of the innumerable ANNs, is either implemented physically in the hardware, or some general model forms the hardware’s structure and is programmed digitally. A similar distinction, but then for the weights/synapses, is the difference between online learning and static weights: some synapses are thinner/wider [confusion noises], which indicates their weights. Have you encountered these differences in your investigation of the topic? If yes, could you elaborate or offer some analogy? [source 1]

    I have seen many network models and understood few, but all of them seem to sit on a spectrum of biological inspiration, meaning the system looks more or less like the biological brain. I have no doubt you found that the terms ‘biologically plausible’ and ‘biologically inspired’ are not used interchangeably. My question: are all neuromorphic computer systems spiking neural networks, or is this just a (very popular) category of neural networks that looks the most like a biological brain, and do many other types of neural networks therefore exist in neuromorphic computing? For example, would a feed-forward neural network be a good network model design? Following up on this, one could imagine that for neuroscience you would prefer a biologically plausible model, while for other goals, like image recognition, neural networks have already proven to be very effective and a biologically inspired model might suffice. My question: is this way of thinking, the degree of biological inspiration, correct? And if yes, where on this spectrum could we expect a general-purpose machine like laptops and phones? I could go on for days asking questions about this subject, though this is more than enough for now. [Source 2]

    Do you think neuromorphic chips possess potentially transformative power for the nature of work, or even our day-to-day life? Why? If one takes a moment to think of all the applications, one can come up with an abundance of examples in which daily life could change. A personal favourite that came to mind is how the system’s energy savings could change things. I love pondering the idea that I won’t have to charge my phone for a week. And all the environmental benefits, of course. Also, the availability of extremely (energy) efficient ML could impact and shake up AWS.

    Source 1 (use google and it should pop up): (Huynh, 2016) Exploration of dynamic communication networks for Neuromorphic Computing. Master thesis TU/e
    Source 2: (Schuman et al, 2017) A Survey of Neuromorphic Computing and Neural Networks in Hardware. URL: https://arxiv.org/pdf/1705.06963.pdf
