“Why, sometimes I’ve believed as many as six impossible things before breakfast.” ― Lewis Carroll, Through the Looking-Glass
You can’t beat the human brain.
Not yet, anyway.
This 1.5 kg organic powerhouse processes external stimuli, coordinates bodily movement, regulates human emotions, and controls vital functions, all while delicately managing internal body temperature and heartbeat at the same time. As computer scientists working on neuromorphic computing are discovering, it is going to be very, very challenging to replicate, or even emulate.
Using just 20 watts of power, this critical human organ, roughly 60% fat, with the remaining 40% a mix of water, protein, carbohydrates, and salts, is estimated to perform the equivalent of an exaflop, that is, a billion billion calculations per second. The most efficient computers of today, meanwhile, use around 10,000 times more energy than the brain for seemingly simple tasks like image processing and recognition.
Yes, the human brain is a bustling beast of a machine: thinking of as many as six impossible things before breakfast, figuring out mundane matters like what to cook for dinner, and working on complicated tasks such as a proposed solution to the Riemann hypothesis. So it’s no surprise that scientists have turned to its intricately complex yet energy-efficient design for a workable computing solution, just as the von Neumann bottleneck begins to choke AI computation and traditional computing, left behind by the end of Dennard scaling and the winding down of Moore’s Law, needs to pick up and move on.
Neuromorphic computing isn’t the only computing paradigm scientists are working on to see us through the future. Quantum computing is evolving side by side, though it currently consumes more energy and struggles with error rates. As the future of computing unfolds, let’s unpack what neuromorphic computing has to offer those working in AI and its associated fields.
How Does Neuromorphic Computing Work?
Neuromorphic computing attempts to replicate the neocortex of the brain, also known as the six-layered cortex, isocortex, or neopallium. The neocortex is thought to be responsible for higher-order functions like sensory perception, spatial reasoning, motor commands, and language processing, and is made up of a dense network of neurons that facilitates complex thinking and helps process raw information.
Likewise, neuromorphic computing tries to replicate the efficiency of the neocortex by making use of a spiking neural network (SNN). An SNN can be thought of as the hardware-oriented counterpart of a conventional artificial neural network, and it works on the idea of a “spike”.
Biological neurons communicate through spikes: electrical impulses that serve as the basic unit of information, passing signals from one neuron to another. SNNs likewise use discrete electrical signals to transfer information between their neurons.
Neuromorphic computing devices replicate the brain’s spiking behavior. Rather than using traditional logic gates, their computational units, called “neurons”, emit electrical pulses, or spikes, to communicate. When incoming spikes push another neuron past a certain threshold, that neuron fires in response. Neurons can generate spikes independently of a central clock, allowing for highly parallel operation.
There are several models that try to capture the neocortex’s functioning in computational form. Models like the Hodgkin-Huxley model, the Leaky Integrate-and-Fire model, the Izhikevich model, and the Adaptive Exponential Integrate-and-Fire (AdEx) model are some of the basic blueprints for creating a functioning SNN; the simplest of these is sketched below.
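To make the spiking idea concrete, here is a minimal sketch of a Leaky Integrate-and-Fire neuron in plain Python: the membrane potential leaks toward rest, integrates incoming current, and emits a spike when it crosses a threshold. All parameter values are illustrative assumptions, not drawn from any particular chip or paper.

```python
# Minimal Leaky Integrate-and-Fire (LIF) neuron simulation.
# Parameter values are illustrative, not taken from any specific device.
dt       = 1e-3   # time step (s)
tau_m    = 20e-3  # membrane time constant (s)
v_rest   = -65.0  # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset  = -70.0  # reset potential after a spike (mV)
r_m      = 10.0   # membrane resistance (MOhm)

v = v_rest
spikes = []
input_current = 2.0  # constant injected current (nA)

for step in range(1000):  # simulate 1 second
    # Leak toward rest while integrating the input current
    dv = (-(v - v_rest) + r_m * input_current) / tau_m
    v += dv * dt
    if v >= v_thresh:          # threshold crossed: emit a spike
        spikes.append(step * dt)
        v = v_reset            # reset the membrane potential

print(f"{len(spikes)} spikes in 1 s of simulated time")
```

Even this toy version shows the event-driven character of the paradigm: between spikes, the neuron does nothing a clocked processor would have to poll for.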
The chief drawback here is that creating a useful SNN depends on understanding the functioning of the human brain well enough to replicate it accurately. Unfortunately, the brain’s workings are still far from fully understood, which continues to limit progress in the field of neuromorphic computing.
Exploring Neuromorphic Hardware and Software
Neuromorphic computing has advanced significantly thanks to early developments like Stanford University’s Neurogrid, which allowed real-time simulation of millions of neurons and billions of connections. Other research efforts, such as IMEC’s self-learning chip and the European Union’s Human Brain Project, pushed the field forward by supporting brain-inspired computing. Large-scale projects like SpiNNaker, a real-time multi-core digital chip system, and BrainScaleS, an accelerated analog neuron-synapse model, represent major milestones in neuromorphic hardware.
Neuromorphic processors from major tech companies include Intel’s Loihi, GrAI Matter Labs’ NeuronFlow, and IBM’s TrueNorth and NorthPole. Most of these chips use traditional silicon and CMOS technology, though there’s growing interest in alternative materials like ferroelectric and phase-change materials. Memristors, nonvolatile memory elements, offer a way to combine memory and processing, helping to emulate the behavior of biological neurons.
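As a toy illustration of why memristors suit this role, the sketch below (a deliberate simplification of my own, not a device-accurate model) treats a device’s conductance as a synaptic weight that drifts with applied voltage pulses and persists between operations, so storage and the weighting of a signal happen in the same element.

```python
class ToyMemristor:
    """Toy memristive synapse: conductance (the 'weight') changes
    with applied voltage pulses and is retained afterwards.
    A didactic simplification, not a device-accurate model."""
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, rate=0.05):
        self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

    def pulse(self, voltage):
        # A positive pulse nudges conductance up, a negative one down;
        # the new state is nonvolatile (it simply stays in self.g).
        self.g = min(self.g_max, max(self.g_min, self.g + self.rate * voltage))

    def read(self, voltage):
        # Ohmic read-out: the output current is the input scaled by the
        # stored weight, so memory and "multiply" share one element.
        return self.g * voltage

syn = ToyMemristor()
syn.pulse(+1.0)        # "train": strengthen the synapse
print(syn.read(0.2))   # "infer": weighted signal from the stored state
```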
On the software side, neuromorphic computing often relies on algorithms that support unique ways of training and processing information. Techniques for converting deep neural networks to spiking neural networks let developers reuse conventionally trained models on spiking hardware. Evolutionary algorithms use concepts like mutation and selection to optimize network parameters, while graph-based structures represent spiking neural networks as interconnected nodes that reflect timing relationships between neurons.
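To illustrate the conversion idea, the sketch below rate-codes a ReLU activation as a Poisson spike train, so that the average firing rate over a time window approximates the original analog value. The encoding scheme and parameters here are simplifying assumptions, not any specific toolkit’s method.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def poisson_encode(activation, n_steps=1000, max_rate=1.0):
    """Rate-code an analog activation in [0, 1] as a Poisson spike
    train: at each time step a spike fires with probability
    proportional to the activation, so the mean spike count
    tracks the original value."""
    p = np.clip(activation * max_rate, 0.0, 1.0)
    return rng.random(n_steps) < p  # boolean spike train

# A single ReLU activation from a (hypothetical) trained network
analog = relu(0.37)

spike_train = poisson_encode(analog)
estimate = spike_train.mean()  # firing rate approximates the activation

print(f"analog value: {analog:.3f}, rate-coded estimate: {estimate:.3f}")
```

The trade-off is visible in the parameters: more time steps give a closer approximation, but take longer to compute.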
Methods like synaptic plasticity, inspired by the brain’s ability to adapt, adjust the weights of connections between neurons based on spike timing. Another method, reservoir computing, employs recurrent neural networks as “reservoirs” that map data into a higher-dimensional space: an untrained spiking neural network with complex internal feedback processes information through its natural dynamics, which suits the temporal data found in real-world applications.
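Here is a minimal sketch of the best-known spike-timing rule, pair-based spike-timing-dependent plasticity (STDP): a presynaptic spike arriving shortly before a postsynaptic one strengthens the connection, while the reverse ordering weakens it. The constants are illustrative assumptions.

```python
import math

def stdp_update(w, t_pre, t_post,
                a_plus=0.01, a_minus=0.012, tau=20e-3):
    """Pair-based STDP: adjust weight w from the relative timing of
    one presynaptic spike (t_pre) and one postsynaptic spike (t_post),
    both in seconds. Constants are illustrative, not fitted values."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post: potentiate (strengthen)
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:  # pre fired after post: depress (weaken)
        w -= a_minus * math.exp(dt / tau)
    return max(0.0, min(1.0, w))  # keep the weight in [0, 1]

w = 0.5
w = stdp_update(w, t_pre=0.010, t_post=0.015)  # causal pairing -> stronger
print(f"weight after causal pairing: {w:.4f}")
```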
Neuromorphic Computing Is Going to Power AI Through the Third Wave
The von Neumann architecture is not going to be able to support AI development, given that the compute consumed by the largest AI training runs has been doubling roughly every three to four months.
At present, neuromorphic computing is being aggressively developed to bring AI features to our smartphones at energy-efficient cost. Neuromorphic chips use less energy and deliver better performance because memory and processing sit close together and neurons communicate only when spikes occur, which makes passing information around far more efficient than constantly shuttling data across a von Neumann bus.
It’s not just AI that stands to benefit from neuromorphic computing, but also IoT, the wearables industry, sensors, transportation, space exploration, defense, and manufacturing. Finally, neuromorphic chips will aid the robot revolution, helping robots recognize and interact with their environment independently at low energy cost.
Wrapping Up
Neuromorphic computing started in the 1980s with Carver Mead’s work on the analog cochlea and the silicon retina. Eventually, some researchers speculate, neuromorphic computing could even veer towards developing consciousness in computers.
What’s going to happen to the human brain then? Is the very thing that invented neuromorphic computing going to become a vestigial organ?
Time to rack those brains real hard.