Biomimicry: From Neural Networks to Neural Architecture
Biomimicry is not new; the underlying philosophy is that nature has already solved some of the puzzling problems we humans face today. Just look at a few examples: bird beaks as inspiration for trains, gecko toes for adhesives, whales for wind turbines, and spiders for protective glass (Interesting Engineering, 2018). Nature, it seems, really can help us solve difficult problems. But what about our own nature; what problems could human biology inspire us to solve?
Well, there is one problem in the field of computer science that I am fairly sure you have heard of: the nearing ‘death’ of Moore’s law. Gordon E. Moore predicted in 1965 that “the number of transistors per silicon chip doubles every year” (Britannica, n.d.). However, we are nearing the limits of physics when it comes to scaling down transistors and packing them closer together on a traditional computer chip (Technology Review, 2020); going any denser would cause overheating issues.
There are plenty of problems that require vastly more computational power than we possess today, for example in the fields of physics, chemistry and biology, but also in more socio-technical contexts. Our traditional computer chips, built on the Von Neumann architecture, do not pack the power to solve these problems; they even struggle with tasks such as image and audio processing. Perhaps the biggest flaw is the infamous ‘Von Neumann bottleneck’.
This, amongst other reasons, has inspired researchers to pursue a different type of architecture: one that is more energy efficient, packs more processing power, and gets rid of a particular bottleneck between processing and memory retrieval (more on that below). One promising field of research is that of neuromorphic architecture: a design that mimics the architecture of the human brain.
Traditional chips
Von Neumann architectures – what your laptop and mobile phone, among other devices, are built on – use computer chips with a master clock that, at each tick, evaluates binary inputs and passes binary outputs through the logic gates formed by the chip’s transistors. Each transistor can only be on or off, and the processor often stands idle while waiting to fetch information from the separate memory, which makes these chips very energy inefficient and gives rise to the ‘Von Neumann bottleneck’. The latter comes down to the problem that, no matter how much processing power grows, if ‘transfer rates’ (the rates at which memory is retrieved) stay the same, latency will not improve (All About Circuits, 2020). This clock-driven, binary architecture stands in stark contrast to the architecture of neuromorphic chips.
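To make the bottleneck concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers and the `run_time_seconds` helper are invented purely for illustration (they are not benchmarks of any real chip); the point is that once a workload is limited by how fast data moves between memory and processor, making the processor faster barely changes the total run time.

```python
# Toy model of the Von Neumann bottleneck: the slower of the two stages
# (arithmetic vs. moving data to and from memory) bounds the total run time.

def run_time_seconds(ops, bytes_moved, flops_per_s, mem_bytes_per_s):
    """Rough lower bound on run time for a workload that needs `ops`
    arithmetic operations and `bytes_moved` bytes of memory traffic."""
    compute_time = ops / flops_per_s
    transfer_time = bytes_moved / mem_bytes_per_s
    return max(compute_time, transfer_time)  # the slower stage dominates

ops = 1e9            # one billion operations (illustrative)
bytes_moved = 8e9    # eight gigabytes shuttled between processor and memory

baseline = run_time_seconds(ops, bytes_moved, flops_per_s=1e11, mem_bytes_per_s=2e10)
faster_cpu = run_time_seconds(ops, bytes_moved, flops_per_s=1e12, mem_bytes_per_s=2e10)

print(f"baseline:                        {baseline:.2f} s")
print(f"10x faster CPU, same memory bus: {faster_cpu:.2f} s")  # unchanged: memory-bound
```

With these made-up numbers, a processor that is ten times faster finishes the job no sooner, because the memory transfer was the bottleneck all along.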
Neuromorphic chips
Neuromorphic chips contain a spiking neural network: artificial neurons that are only activated when incoming signals reach an activation threshold, and that otherwise remain at a low-power baseline. These signals, which are electric pulses, are fired when sensory input changes. The meaning of a signal depends on the number of spikes within a certain period of time, as well as on the design of that specific chip. The signals are graded rather than binary, which means that, via weighted values, they can carry more information per signal than a bit can. Overall, the design lends itself excellently to processing sensor data, including speech, image and radar inputs. Today such sensory inputs are typically processed by neural networks, but on traditional architectures. The hardware of neuromorphic chips, as the name may give away, resembles a neural network itself, which adds the benefit of running AI models at vastly higher speeds than CPUs and GPUs can (IEEE, 2017).
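As a rough intuition for what ‘spiking’ means, below is a minimal leaky integrate-and-fire neuron in Python. It is a textbook toy model rather than how any particular chip works, and the threshold and leak values are arbitrary: the neuron accumulates input, leaks charge over time, and only emits a spike (an event) when its potential crosses the threshold; when nothing is happening, it does essentially nothing.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: the membrane potential integrates
# incoming current, leaks back toward rest, and emits a spike only when it
# crosses the threshold -- otherwise the neuron stays quiet.
def lif_neuron(input_current, threshold=1.0, leak=0.95, dt=1.0):
    potential = 0.0
    spike_times = []
    for t, current in enumerate(input_current):
        potential = leak * potential + current * dt  # integrate and leak
        if potential >= threshold:
            spike_times.append(t)   # event: a spike is emitted
            potential = 0.0         # reset after firing
    return spike_times

# A mostly quiet input with a brief burst of activity in the middle:
current = np.concatenate([np.zeros(20), np.full(10, 0.3), np.zeros(20)])
print(lif_neuron(current))  # spikes cluster around the burst; silence costs nothing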
The artificial neurons of a neuromorphic chip, or synaptic cores, operate in parallel. This means that multiple neurons can be activated, and can activate other neurons, at the same time (Psychology Today, 2019). This makes neuromorphic chips highly scalable (you can simply increase the number of artificial neurons) as well as fault-tolerant, since signals can find other synaptic routes, via other neurons, when a neuron fails; a small sketch of this rerouting follows below. This mimics neuroplasticity in the human brain.
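The following toy example illustrates that fault-tolerance idea in Python. The five-node network and the breadth-first `find_route` helper are hypothetical and say nothing about how a real chip routes signals; they only show that with redundant connections, removing one neuron still leaves a working path.

```python
from collections import deque

# A tiny network of artificial neurons with redundant connections.
connections = {
    "in":  ["a", "b"],
    "a":   ["c"],
    "b":   ["c"],
    "c":   ["out"],
    "out": [],
}

def find_route(graph, start, goal, failed=frozenset()):
    """Breadth-first search for a path from start to goal, skipping failed neurons."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_route(connections, "in", "out"))                # ['in', 'a', 'c', 'out']
print(find_route(connections, "in", "out", failed={"a"}))  # reroutes via 'b'
```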
Another quintessential aspect of neuromorphic chips is that memory and computation are tightly coupled. Whereas traditional chips require external memory for non-volatility, this type of memory is inherent to the design of neuromorphic chips (IBM, n.d.). The artificial neurons within a chip are connected by memristors that act as artificial synapses. These memristors pack non-volatile memory, because they ‘remember’ the electric charge that has previously flowed through them, as well as the direction in which it was sent (MIT, 2020). This non-volatility means that memristors retain their information even after the device is shut off.
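To give a feel for this coupling, here is a deliberately simplified sketch in Python. `ToyMemristor` and its parameters are invented for illustration and do not model real device physics; the point is only that the ‘weight’ of the connection is a persistent state shaped by the history of charge passed through it, and that reading the device is itself a small computation, right where the memory lives.

```python
# Toy model of a memristor as an artificial synapse: its conductance ("weight")
# depends on how much charge has flowed through it and in which direction,
# and it keeps that state without power.
class ToyMemristor:
    def __init__(self, conductance=0.5, rate=0.01):
        self.conductance = conductance  # acts as the stored synaptic weight
        self.rate = rate

    def apply_current(self, charge):
        """Positive charge strengthens the connection, negative charge weakens it."""
        self.conductance = min(1.0, max(0.0, self.conductance + self.rate * charge))

    def read(self, voltage):
        """Reading is just Ohm's law: computation happens where the memory lives."""
        return voltage * self.conductance

synapse = ToyMemristor()
for _ in range(20):
    synapse.apply_current(+1.0)  # repeated pulses in one direction
print(synapse.read(1.0))         # 0.7 -- the device 'remembers' its history
```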
Players to watch
The neuromorphic computing industry is consolidated and built upon traditional computer engineering capabilities. In my view, there are three chips to watch in the field of neuromorphic computing: BrainChip’s Akida, IBM’s TrueNorth, and Intel’s Loihi.
- BrainChip’s Akida consists of 80 neuromorphic processing units that amount to 1.2 million neurons and 10 billion synapses (Nextplatform, 2020).
- IBM’s TrueNorth consists of 4096 cores that amount to 1 million neurons and 250 million synapses (CACM, 2020).
- Intel’s Pohoiki integrates 768 Loihi chips and amounts to 100 million neurons (Intel, 2020).
While Intel’s Pohoiki – a neuromorphic system that aggregates Loihi chips – is still in the research phase and only available to researchers, its 100 million neurons make it the most advanced neuromorphic system to date (InsideHPC, 2020). It can perform specific tasks up to 1,000 times faster and 10,000 times more efficiently than conventional processors (Intel, 2020). In terms of the number of neurons inside, Intel’s Pohoiki resembles the brain of a small mammal. In addition, Intel (2020) claims that the neuromorphic system is suited not only to AI workloads but to a wide range of computationally difficult problems.
Practical considerations
Neuromorphic chips are energy efficient, run AI models more efficiently than traditional architectures, are scalable, and reduce latency by tightly coupling processing and memory. These properties make neuromorphic chips fit to run AI models at the edge rather than in the cloud, which can be valuable for applications in, among others, autonomous cars, industrial (IoT) environments, smart cities, cybersecurity, embedded video and audio, and optimization problems such as minimal-risk stock portfolios (Nextplatform, 2020; Intel, 2020). In addition, the energy-efficient and compact design could enable deep learning to become embedded inside devices such as mobile phones. This could drastically improve natural language processing in day-to-day applications – just imagine Siri actually understanding your question and providing a helpful answer!
However, we are not there yet. There are still plenty of challenges, amongst which is developing the most efficient learning algorithms to be run on neuromorphic chips. Neuromorphic chips are still in their infancy, and overcoming technical hurdles will not be the only challenge (Analytics Insight, 2020); ethical concerns surrounding biomimicking hardware already exist, and should be expected to intensify as the technology gains traction and its capabilities grow.
Neuromorphic hardware is not yet commercially viable, but that does not mean we should not pay attention to it.
In the face of all this exciting uncertainty, I will conclude with some food for thought. Please let me know in the comments what your opinion is on (one of) the following three questions:
- Do you think neuromorphic chips have the potential to transform the nature of work, or even our day-to-day life? Why?
- What type of (business) applications do you see for hyper-efficient neural network processing at the edge?
- Can you think of any problems that we have pushed down the uncertain and lengthy path of quantum computing research, but that may be solved earlier by neuromorphic computing?
References
All About Circuits. (2020) https://www.allaboutcircuits.com/news/ai-chip-strikes-down-von-neumann-bottleneck-in-memory-neural-network-processing/ [Accessed September 25, 2020]
Analytics Insight. (2020) https://www.analyticsinsight.net/neuromorphic-computing-promises-challenges/ [Accessed September 28, 2020]
Britannica. (n.d.) https://www.britannica.com/technology/Moores-law/ [Accessed September 25, 2020]
CACM. (2020) https://cacm.acm.org/magazines/2020/8/246356-neuromorphic-chips-take-shape/fulltext [Accessed September 28, 2020]
IBM. (n.d.) https://www.zurich.ibm.com/sto/memory/ [Accessed September 26, 2020]
IEEE. (2017) https://spectrum.ieee.org/semiconductors/design/neuromorphic-chips-are-destined-for-deep-learningor-obscurity [Accessed September 26, 2020]
InsideHPC. (2020) https://insidehpc.com/2020/03/intel-scales-neuromorphic-system-to-100-million-neurons/ [Accessed September 28, 2020]
Intel. (2020) https://newsroom.intel.com/news/intel-scales-neuromorphic-research-system-100-million-neurons/ [Accessed September 28, 2020]
Interesting Engineering. (2018) https://interestingengineering.com/biomimicry-9-ways-engineers-have-been-inspired-by-nature [Accessed September 29, 2020]
MIT. (2020) https://news.mit.edu/2020/thousands-artificial-brain-synapses-single-chip-0608 [Accessed September 26, 2020]
Nextplatform. (2020) https://www.nextplatform.com/2020/01/30/neuromorphic-chip-maker-takes-aim-at-the-edge/ [Accessed September 28, 2020]
Psychology Today. (2019) https://www.psychologytoday.com/us/blog/the-future-brain/201902/neuromorphic-computing-breakthrough-may-disrupt-ai [Accessed September 26, 2020]
Technology Review. (2020) https://www.technologyreview.com/2020/02/24/905789/were-not-prepared-for-the-end-of-moores-law/ [Accessed September 25, 2020]