Neuromorphic Computing: The future brains of computing

This Insight was last updated in November 2022; for an up-to-date market map and the latest funding information, please refer to the Neuromorphic Computing industry hub.

In the ongoing quest to develop the world’s most powerful computer, one contender still leads the way, and no, it’s not Frontier, Fugaku, or LUMI, the leading supercomputers of today. We’re talking about the human brain, the most powerful processing unit we’ve ever encountered. It is estimated to be capable of 1 exaFLOP of calculations per second (i.e., 10^18 calculations), a benchmark only reached by a supercomputer (Frontier) in 2022. However, Frontier required over 21 MW of electricity to maintain this performance level, compared to the brain, which needs anywhere between 12 and 25 watts. In essence, neuromorphic computing aims to mimic the human brain (modeling aspects of biological neural networks) to perform calculations.
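As a rough back-of-the-envelope illustration using the figures above (both numbers are approximate, and calculations per second is used loosely as a proxy for FLOPS):

```python
# Rough energy-efficiency comparison using the figures quoted above
# (illustrative only; both numbers are approximate).
EXAFLOP = 1e18                    # calculations per second

frontier_watts = 21e6             # >21 MW for Frontier
brain_watts = 20                  # roughly 12-25 W for the brain

frontier_per_watt = EXAFLOP / frontier_watts
brain_per_watt = EXAFLOP / brain_watts

print(f"Frontier: {frontier_per_watt:.2e} calculations/s per watt")
print(f"Brain:    {brain_per_watt:.2e} calculations/s per watt")
print(f"The brain is roughly {brain_per_watt / frontier_per_watt:,.0f}x more energy efficient")
```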
Innovation in the neuromorphic space is being led by large chip manufacturers such as Intel and Qualcomm, which are in the process of developing neuromorphic chips for commercial sale. Additionally, a handful of startups are exploring the space, both in terms of chip design and architecture and in terms of potential use cases for neuromorphic chips across industries. These startups have raised over USD 600 million so far. Whilst there have been bigger investments and greater interest in companies exploring technologies such as quantum computing, neuromorphic computing is significantly closer to commercialization, with Intel launching a commercially available circuit board (Kapoho Point) incorporating eight neuromorphic chips in September 2022.
In this Insight, we take a detailed look into neuromorphic computing, including the technical benefits versus existing architectures, potential use cases, demand drivers, and a long-term view of the industry.

What is neuromorphic computing?

Simply put, neuromorphic computing is a form of computing that most closely resembles the human brain in terms of how calculations are performed. For comparison, traditional computing follows the von Neumann architecture, which was first published by John von Neumann in 1945 in the “First Draft of a Report on the EDVAC.” 
A key feature of any von Neumann-based computer is that the fetching of instructions and data operations cannot happen at the same time, due to the separation of memory and the central processing unit (CPU). Any data operation must pass through ‘the bus’ (the communication system that transfers data between components) between the CPU and memory, as shown in the diagram below. This limitation is known as the von Neumann bottleneck; it constrains the performance of computers today and shapes how software is written, since programs must be coded to make data transfer across the bus as efficient as possible.
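As a toy illustration of the bottleneck (a sketch in Python, not a model of any real hardware), the snippet below counts how many bus transfers a simple computation generates when every operand and result must cross the bus between memory and the CPU:

```python
def sum_of_squares(values):
    """Sum of squares on a hypothetical bus-limited machine."""
    bus_transfers = 0   # loads/stores between memory and the CPU
    alu_operations = 0  # arithmetic done inside the CPU
    total = 0
    for v in values:
        bus_transfers += 1      # load the operand from memory
        square = v * v
        alu_operations += 1
        total += square
        alu_operations += 1
        bus_transfers += 1      # write the running total back to memory
    return total, bus_transfers, alu_operations

result, transfers, ops = sum_of_squares(range(1_000))
print(f"result={result}, bus transfers={transfers}, ALU ops={ops}")
# Bus traffic grows in lockstep with compute; a neuromorphic design
# co-locates memory (synapses) with compute (neurons) and avoids this.
```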

Von Neumann architecture vs. the human brain vs. Neuromorphic computing

Source: Photonix, IBM
Meanwhile, the human brain uses neural networks, which consist of a series of interconnected neurons that exchange data via electrical signals. These signals are passed around the network via synapses (the contact points between neurons). Unlike the von Neumann architecture, neural networks can analyze the inputs received at the neuron level and adjust the output sent accordingly, without the need for a connecting bus between the CPU and memory. This brings significant performance benefits in terms of the speed of analysis and the ability to handle more complex tasks. Additionally, neural networks can handle data in more complex forms than binary (1 and 0), including numbers (0–9) and letters (A–Z).
Neuromorphic computing seeks to replicate the human brain through artificial neural networks (also known as deep neural networks). These networks are typically designed based on the Rosenblatt perceptron (a probabilistic model for information storage in the brain), with three key layers. The input layer receives the data, which is then analyzed and passed through the hidden layer(s); these determine which electrical signals to pass on based on the logic the network has been trained on. The output layer provides the result of the operation.
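For illustration, a minimal sketch of this three-layer structure in Python/NumPy is shown below; the layer sizes, weights, and activation function are arbitrary placeholders rather than details of any real neuromorphic system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal three-layer feed-forward network (input -> hidden -> output).
# The weights are random placeholders; in practice they would be learned
# from training data.
W_hidden = rng.normal(size=(4, 3))   # 4 inputs feeding 3 hidden neurons
W_output = rng.normal(size=(3, 2))   # 3 hidden neurons feeding 2 outputs

def forward(x):
    hidden = np.tanh(x @ W_hidden)    # hidden layer decides which signals to pass on
    return np.tanh(hidden @ W_output) # output layer provides the result

x = np.array([0.5, -1.0, 0.25, 0.8])  # input layer receives the data
print(forward(x))
```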

Neuromorphic computing vs. quantum computing

A common comparison of future computing technologies for AI is between neuromorphic computing and quantum computing. Whilst both technologies are considered next-gen, there are significant differences in how they achieve their performance improvements. Most notably, neuromorphic computers can operate at normal temperatures, which makes them easier to implement commercially in the short term, whereas quantum computers generally require extreme cooling, although some quantum computing startups are exploring how to make quantum computing viable at room temperature.
Additionally, the use cases differ: neuromorphic computing enables types of AI that are different from current AI models, whereas quantum computing would largely retain traditional AI models while enabling significantly more complex computations.

Comparison between neuromorphic and quantum computing

Neuromorphic computing to usher in a new age of AI

The increased focus on developing neuromorphic computing is primarily being driven by the ability to handle complex tasks using AI. Specifically, present-day AI is heavily based on rules that guide the decision-making process for any task or operation. However, future AI built on neuromorphic computing would be able to handle ambiguous tasks and learn from datasets, much like how the human brain is able to adapt in a flexible manner when challenged.
One example of the application of neuromorphic computers in AI is a partnership between Intel and Cornell University, which used Intel’s Loihi chip to recognize 10 different odors. Traditional AI, on the other hand, would require extensive data and instructions to be able to replicate a similar outcome, with no guarantee of achieving the same level of accuracy.
In terms of specific AI, neuromorphic computing could be applied to:
  • Probabilistic computing problems: In this case, AI has to sift through noisy and unreliable data to arrive at an end solution or output. This would significantly expand the potential use of neuromorphic computing systems into new use cases and devices in everyday life. In a test using chips from IBM, Intel, and Nvidia, neuromorphic chips were able to solve problems at scale at a significantly lower energy cost and within a competitive timeframe.
  • Ability to handle ambiguity and learn from gathered data: Traditional AI requires training using large data sets and is rules-based; this creates challenges when historical data is not available or if the data entered does not align perfectly with the rules set. Neuromorphic computing, on the other hand, is able to learn quickly, like the human brain. It is also more comfortable with handling problems such as constraint satisfaction, where the computer has to find a solution within multiple restrictions or limitations.
  • Performing AI calculations on devices: Currently, most smart devices such as smartphones and IoT devices have to send data to a cloud-based system for processing when handling compute-heavy tasks. This is due to limitations on the processing power that can be fitted to a portable device, as increased processing power has a negative impact on energy consumption and battery life. Neuromorphic chips are able to achieve high performance levels at a fraction of the energy cost (see the sketch after this list), making them suitable for edge devices and capable of handling compute-heavy tasks.
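To give a flavor of the spiking, event-driven style of computation behind these efficiency claims, the sketch below simulates a leaky integrate-and-fire (LIF) neuron, the basic unit most neuromorphic chips implement in silicon; the parameter values are illustrative and not taken from any particular chip:

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_threshold=1.0, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron and return its spike times."""
    v = 0.0
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input current.
        v += dt * (-v + i_in) / tau
        if v >= v_threshold:
            spike_times.append(t)  # the neuron fires only when the threshold is crossed...
            v = v_reset            # ...and is then reset; no output is produced otherwise
    return spike_times

# Bursty input: current is on for 25 steps out of every 50, off otherwise.
current = np.where(np.arange(200) % 50 < 25, 2.0, 0.0)
print("spike times:", simulate_lif(current))
```

Because the neuron only fires, and therefore only triggers downstream work, when its threshold is crossed, computation and energy use scale with activity rather than with a fixed clock.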

Potential use cases of neuromorphic computing

Drivers of neuromorphic computing

1) Ending of Moore’s Law opens the door for neuromorphic chips

Semiconductor manufacturers have followed Moore's Law since the mid-1960s when planning advances in processor technology and R&D. For context, Moore’s Law postulates that the number of transistors on a processor doubles every two years, whilst the cost of computers is halved. The law is named after Gordon Moore, one of the co-founders of Intel, who made the observation in 1965.
Since Moore made the observation, the law has largely held true. However, scientists expect it to become outdated during the 2020s, due to the physical limits on shrinking components, combined with the excessive cooling requirements of smaller processors.
Scientists have therefore recognized the need for new processor architectures that change the way chips operate, and neuromorphic processors are seen as a potential solution. Specifically, thanks to their low energy consumption and efficient operating requirements, neuromorphic processors could continue to deliver performance gains and help usher in a post-Moore’s Law era for semiconductors.

2) Low energy requirements of neuromorphic chips could make every edge device ‘smart’

The low energy consumption of neuromorphic computers makes them well suited for use in edge devices, including smartphones and internet-of-things (IoT) devices. Currently, these devices have to send intensive tasks to a cloud-based or centralized server, which then executes the task and sends the output back to the edge device. However, going forward, these devices could be outfitted with neuromorphic chips, which could handle the tasks locally without requiring significant energy consumption, thereby improving the speed of analysis.
While an exact saving is hard to quantify, some studies have already demonstrated that AI models running on neuromorphic computers can be 16x more energy efficient than on traditional computers. From a financial perspective, Google was able to reduce its data center cooling costs by over 40% in 2016 by applying machine learning built on neural networks, an approach that more companies could adopt if neuromorphic chips become commercially available.
So far, studies have shown the potential of neuromorphic computing for edge devices, including faster data processing thanks to on-device calculation, combined with the ability to learn faster than traditional AI and adapt to changes more easily than rules-led AI.

Neuromorphic computing segmentation

Within the neuromorphic industry, the primary focus remains the development of suitable processor chips, and the highest concentration of disruptors sits within this segment. This reflects the overall state of the industry, as the development of a commercially available neuromorphic chip is the immediate next step.
Across the other segments, incumbents maintain a strong presence, primarily focusing on hardware and software infrastructure, as these segments are still in development and yet to be commercialized. The most advanced incumbent development so far is Intel’s Loihi chip, with a second-generation version launched in October 2021. The chip is primarily for research purposes and is available to researchers and developers exploring neuromorphic computing.

Neuromorphic computing market map

Source: Speeda EDGE

What are the challenges to growth?

Though the potential of neuromorphic computing is substantial, some challenges need to be overcome before it can be a true successor to traditional computing architectures. As traditional computing runs into the physical limits of developing smaller chips, multiple technologies have emerged that could eventually dominate the architecture of future chips, including quantum computing. For neuromorphic computing to be successful, it will need to clearly establish itself as the leading alternative to traditional computing and achieve commercialization. Intel, for example, is still holding on to Moore’s Law as guidance for its research efforts even while developing neuromorphic chips such as Loihi. Convincing the wider computing industry to shift to neuromorphic computing is therefore crucial, as stakeholder buy-in is a critical factor in technology adoption.

Future of neuromorphic computing

Commercialization remains a key milestone for future market acceptance

The primary milestone to be achieved in neuromorphic computing is scaling the availability of commercial neuromorphic chips. Currently, only a limited number of players offer chips for commercial use, with BrainChip claiming to be the first company to offer a commercially available neuromorphic processor. Other companies in this space are still at the research stage and only offer neuromorphic chips for research purposes, such as Intel’s Loihi and IBM’s TrueNorth. Going forward, the availability of commercial neuromorphic processors that can easily be adopted by developers and other solutions companies will be a key milestone in the development and growth of neuromorphic computing.

Development of tools and solutions

The ecosystem surrounding neuromorphic computing remains relatively underdeveloped, with most applications focused on neuromorphic vision solutions and limited working solutions outside of that. One example of a solutions company outside of neuromorphic vision is Koniku, which has developed a robot that is able to smell, using neuromorphic processors. 
One of the challenges to adoption is translating traditional AI algorithms to suit neuromorphic chips, as the programming is not directly compatible due to differences in chip architecture. Additionally, the supporting tools for developing, debugging, and implementing neuromorphic-based AI remain immature compared with the tools available for traditional AI. These tools need to improve so that developers can more easily work with neuromorphic computers and bring next-gen solutions to consumers.
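As an illustration of why direct translation is difficult, the sketch below shows one common workaround, rate coding, in which a conventional ReLU activation is approximated by the firing rate of a random spike train; the function names and parameters here are hypothetical and intended only to convey the idea:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    """Conventional ANN activation: continuous-valued output."""
    return np.maximum(x, 0.0)

def rate_coded(activation, timesteps=1000, max_rate=1.0):
    # Emit a 0/1 spike at each timestep with probability proportional to the
    # (clipped) activation, then read the value back as an average firing rate.
    p = np.clip(activation, 0.0, max_rate)
    spikes = rng.random((timesteps,) + activation.shape) < p
    return spikes.mean(axis=0)

a = relu(np.array([-0.3, 0.2, 0.75, 0.5]))
print("exact activations:  ", a)
print("spike-rate estimate:", rate_coded(a))
```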

Appendix: List of neuromorphic computing startups
