Neuromorphic Computing

Reinventing how computers think based on biological neural networks

Overview

Neuromorphic computing is the design of computers that mimic the way the human brain operates, using networks of artificial neurons instead of a traditional central processing unit for calculations. The key benefits of this technology are lower power requirements, faster processing speeds, and a smaller hardware footprint. The primary use case is big data analytics and machine learning, as neuromorphic computers are able to learn faster than traditional computers. Neuromorphic computing is expected to usher in a third wave of AI, in which AI operates in a manner closer to the human brain than current AI models and technology do.

Note: Additional sections (such as market sizing, detailed overview, and incumbents) can be provided on request.

Use cases


Low power consumption and a natural, efficient way of processing information have paved the way for a variety of neuromorphic computing use cases. Applications that require processing large volumes of data, such as computer vision, have been a natural fit, with use cases spanning surveillance, visual inspection (from detecting production-line defects to monitoring aquaculture), and space debris management. Process automation powered by neuromorphic-based AI is also an emerging area, with multiple applications in process optimization and automation.

We have identified key neuromorphic computing use cases below:

The Disruptors


Funding History

Notable Investors


Funding data are powered by Crunchbase


Overview

What is neuromorphic computing?

Simply put, neuromorphic computing is a form of computing that most closely resembles the human brain in terms of how calculations are performed. For comparison, traditional computing follows the von Neumann architecture, which was first published by John von Neumann in 1945 in the “First Draft of a Report on the EDVAC.” 
A key feature of any von Neumann-based computer is that the fetching of instructions and data operations cannot happen at the same time due to the separation of memory and the central processing unit (CPU). Any data operation must pass through ‘the bus’ (the communication system that transfers data between components) between the CPU and memory, as shown in the diagram below. This limitation is known as the von Neumann bottleneck and impacts the performance of computers today as well as the programming of software, since programs have to adjust their coding to improve the efficiency of data transfer through the bus.
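The bottleneck can be made concrete with a toy simulation. This is an illustrative sketch, not a real instruction set: a hypothetical two-opcode machine in which every instruction fetch and every operand read is a serialized transfer over a single shared bus, so memory traffic and computation cannot overlap.

```python
# Toy von Neumann machine: code and data share one memory, and every
# access crosses a single bus. (Illustrative sketch; the opcode set
# and class name are hypothetical, not from any real architecture.)

class ToyVonNeumannMachine:
    def __init__(self, memory):
        self.memory = memory      # unified memory holds code and data
        self.acc = 0              # accumulator register inside the CPU
        self.bus_transfers = 0    # count every word crossing the bus

    def _bus_read(self, addr):
        self.bus_transfers += 1   # each read is one serialized transfer
        return self.memory[addr]

    def run(self, program):
        # program: list of (opcode, operand address) pairs
        for opcode, addr in program:
            self.bus_transfers += 1           # instruction fetch also uses the bus
            if opcode == "LOAD":
                self.acc = self._bus_read(addr)
            elif opcode == "ADD":
                self.acc += self._bus_read(addr)
        return self.acc

mem = {0: 2, 1: 3}
cpu = ToyVonNeumannMachine(mem)
result = cpu.run([("LOAD", 0), ("ADD", 1)])
# result == 5; a two-operation program cost four bus transfers
```

Even this trivial two-instruction program needs four bus transfers (two fetches plus two operand reads), which is why reducing traffic through the bus dominates performance tuning on von Neumann hardware.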

Von Neumann architecture vs. the human brain vs. neuromorphic computing

Neuromorphic - Comparison with Von Neumann and the brain
Source: Photonix, IBM
Meanwhile, the human brain uses neural networks, which consist of interconnected neurons that exchange data via electrical signals. These signals are passed around the network via synapses (the contact points between neurons). Unlike von Neumann architecture, neural networks can analyze the inputs received at the neuron level and adjust the output accordingly, with no bus connecting a separate CPU and memory. This yields significant performance benefits in terms of speed of analysis and the ability to handle more complex tasks. Additionally, neural networks can handle data in forms more complex than binary (1 and 0), such as numbers (0–9) and letters (A–Z).
Neuromorphic computing seeks to replicate the human brain through artificial neural networks (also known as deep neural networks). These networks are typically designed based on the Rosenblatt perceptron (a probabilistic model for information storage in the brain) and have three key layers. The input layer receives the data, which is then analyzed and passed through one or more hidden layers; these determine which electrical signals to pass on, based on the logic the network has been trained on. The output layer provides the result of the operation.
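The three-layer structure described above can be sketched as a minimal feedforward pass. The layer sizes, weights, and threshold activation here are illustrative assumptions, not a trained model or any vendor's implementation.

```python
import numpy as np

# Minimal three-layer feedforward network (input -> hidden -> output),
# mirroring the perceptron-style layering described above.
# Weights are random and untrained; this only shows data flow.

def step(x):
    return (x > 0).astype(float)   # threshold activation: neuron fires or not

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))   # 3 input neurons -> 4 hidden neurons
W_output = rng.normal(size=(4, 2))   # 4 hidden neurons -> 2 output neurons

def forward(inputs):
    hidden = step(inputs @ W_hidden)   # hidden layer decides which signals pass
    return step(hidden @ W_output)     # output layer provides the result

out = forward(np.array([1.0, 0.0, 1.0]))
# out is a length-2 vector of 0s and 1s (one value per output neuron)
```

In a real system the weights would be learned from data; here they only demonstrate how a signal entering the input layer is transformed by the hidden layer before the output layer produces the result.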

Neuromorphic computing vs. quantum computing

