- Magnets can help artificial intelligence achieve human-like efficiency at recognizing objects.
- Researchers develop new networks that use less energy and memory to carry out tasks similar to brain computations.
The electrical dynamics of a neuron are quite similar to the switching dynamics of a nanomagnet. The switching behavior exhibited by magnetic tunnel junction devices is stochastic in nature. Because this behavior mimics a neuron's sigmoid switching behavior, the same magnetic junction can also be utilized to store synaptic weights.
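As a rough illustration of this analogy, the short Python sketch below models the switching probability of a hypothetical magnetic tunnel junction (MTJ) as a sigmoid of the programming current; the constants `i0` and `k` are illustrative assumptions, not values from the study.

```python
import numpy as np

# Minimal sketch (not the authors' code): an MTJ driven by a programming
# current switches with a probability that follows a sigmoid-like curve,
# much like a neuron's firing probability. The scale `i0` and steepness `k`
# below are illustrative, not measured values.

def switching_probability(i_prog, i0=100e-6, k=8e4):
    """Sigmoid-shaped probability that the MTJ flips for a given current."""
    return 1.0 / (1.0 + np.exp(-k * (i_prog - i0)))

def stochastic_neuron_fires(i_prog, rng=np.random.default_rng()):
    """Treat one write attempt as a stochastic 'spike' of the magnetic neuron."""
    return rng.random() < switching_probability(i_prog)

currents = np.linspace(0, 200e-6, 5)
print([round(switching_probability(i), 3) for i in currents])
```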
Using this exceptional property of magnets, researchers at Purdue University have developed a method that could help artificial intelligence (AI) powered robots achieve human-like efficiency at recognizing objects.
The method involves merging magnetics with brain-like networks to teach machines such as drones, self-driving cars, and robots to generalize better across different objects.
A New Algorithm
Spiking Neural Networks (SNNs) provide a promising alternative for realizing intelligent neuromorphic systems that need fewer computational resources than conventional neural networks. These networks encode and transmit data in the form of sparse spiking events.
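To illustrate what sparse spike coding means in practice, here is a minimal Python sketch of Poisson rate coding, one common way of turning pixel intensities into spike trains; the number of time steps and the normalization are illustrative choices, not details taken from the paper.

```python
import numpy as np

# Minimal sketch of rate (Poisson) coding: brighter pixels spike more often,
# and most entries of the resulting spike train are zero (sparse events).

def poisson_encode(image, num_steps=50, rng=np.random.default_rng(0)):
    """Return a (num_steps, H, W) binary spike train for a grayscale image."""
    rates = image / image.max()                    # spike probability per time step
    return (rng.random((num_steps, *image.shape)) < rates).astype(np.uint8)

image = np.random.randint(0, 256, size=(28, 28)).astype(float)
spikes = poisson_encode(image)
print("fraction of time steps with a spike:", spikes.mean())
```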
In this study, the researchers used spike-timing-dependent plasticity (STDP) to develop a new stochastic training algorithm, called Stochastic-STDP, and applied it to ReStoCNet, a deep residual convolutional SNN composed of binary kernels for memory-efficient neuromorphic computing.
Reference: Frontiers in Neuroscience | DOI: 10.3389/fnins.2019.00189 | Purdue University
Using the magnet's intrinsic stochastic behavior, the researchers switched its magnetization state stochastically according to the new algorithm. The trained synaptic weights, deterministically encoded in the magnetization state of the nanomagnet, were then used during inference.
The STDP-based probabilistic learning rule incorporates Hebbian and anti-Hebbian learning approaches to train the binary kernels composing ReStoCNet in a layer-wise unsupervised manner for hierarchical input feature extraction.
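The Python sketch below shows the general shape of such a rule for a single binary synapse, assuming a potentiation branch (presynaptic spike before postsynaptic spike) and a depression branch (the reverse) whose switching probabilities decay with the spike-timing gap; the probabilities and time constant are illustrative and not taken from the published ReStoCNet implementation.

```python
import numpy as np

# Minimal sketch, not the published ReStoCNet code: a stochastic STDP rule
# for one binary synapse. If the presynaptic spike precedes the postsynaptic
# spike (Hebbian case), the weight flips to 1 with some probability; if it
# follows (anti-Hebbian case), it flips to 0 with some probability.
# The probabilities p_pot, p_dep and time constant `tau` are illustrative.

def stochastic_stdp_update(weight, dt, p_pot=0.1, p_dep=0.05, tau=20.0,
                           rng=np.random.default_rng()):
    """dt = t_post - t_pre in ms; returns the (possibly flipped) binary weight."""
    if dt > 0:                                   # pre before post -> potentiate
        if rng.random() < p_pot * np.exp(-dt / tau):
            return 1
    else:                                        # post before pre -> depress
        if rng.random() < p_dep * np.exp(dt / tau):
            return 0
    return weight

w = 0
for dt in [5.0, 12.0, -3.0, 8.0]:    # spike-timing differences seen by one synapse
    w = stochastic_stdp_update(w, dt)
print("final binary weight:", w)
```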
The team used magnets with a high energy barrier to enable compact stochastic primitives and to allow the same device to double as a stable memory element.
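A quick back-of-the-envelope calculation shows why the barrier height matters: by the Néel-Arrhenius law, the mean retention time of a nanomagnet grows exponentially with the energy barrier. The attempt time used in the Python sketch below is a typical textbook value, not a figure from this study.

```python
import numpy as np

# Back-of-the-envelope sketch of why a high energy barrier makes the same
# nanomagnet usable as a stable memory element: retention time grows
# exponentially with the barrier. The attempt time `tau0` of ~1 ns is a
# typical textbook value, not a number reported in this work.

def retention_time(barrier_in_kT, tau0=1e-9):
    """Mean time (seconds) before a thermal fluctuation flips the magnet."""
    return tau0 * np.exp(barrier_in_kT)

for barrier in [10, 20, 40, 60]:     # barrier height in units of k_B * T
    print(f"{barrier} kT -> {retention_time(barrier):.2e} s")
```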
They validated the efficacy of ReStoCNet on two publicly available datasets and showed that residual connections enable the deeper convolutional layers to learn valuable high-level input features, mitigating the accuracy loss incurred by deep SNNs lacking such connections.
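The toy Python sketch below illustrates the general idea of a residual (skip) connection in a spiking convolutional stack, with earlier-layer spikes routed around the deeper layers and merged with their output; the merge-by-OR choice and the toy "conv" layer are illustrative assumptions, not necessarily how ReStoCNet combines its spike maps.

```python
import numpy as np

# Toy sketch of a residual (skip) connection: input spikes bypass two layers
# and are merged with the deep output, so the deeper layers only have to learn
# what the earlier features miss. Not the ReStoCNet architecture itself.

def conv_layer(spikes, threshold=2):
    """Toy spiking 'conv' layer: sum each 3x3 neighborhood and threshold it."""
    h, w = spikes.shape
    out = np.zeros_like(spikes)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = spikes[i - 1:i + 2, j - 1:j + 2].sum() >= threshold
    return out

x = (np.random.default_rng(0).random((8, 8)) < 0.3).astype(np.uint8)
deep = conv_layer(conv_layer(x))
residual = np.logical_or(deep, x).astype(np.uint8)   # skip connection merges input spikes
print("spikes without skip:", deep.sum(), "| with skip:", residual.sum())
```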
How Is It Useful?
The new network is capable of representing both neurons and synapses while reducing the amount of energy and memory required to carry out tasks similar to brain computations.
These brain-like networks can solve difficult optimization problems, such as graph coloring and the traveling salesman problem. The stochastic devices presented in this work can function as a 'natural annealer' and help algorithms escape local minima.
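As a software stand-in for that behavior, the Python sketch below anneals a toy graph-coloring instance, occasionally accepting a worse move so the search can climb out of a local minimum; in hardware, the randomness would come from the stochastic magnet rather than a pseudo-random number generator, and the problem instance here is purely illustrative.

```python
import numpy as np

# Illustrative annealing sketch: random flips sometimes accept a worse move,
# letting the search escape local minima. A tiny graph-coloring instance
# stands in for the hard optimization problems mentioned above.

rng = np.random.default_rng(1)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small 4-node graph
colors = rng.integers(0, 3, size=4)                # start from a random 3-coloring

def conflicts(c):
    """Number of edges whose endpoints share a color."""
    return sum(c[u] == c[v] for u, v in edges)

temperature = 2.0
for step in range(500):
    node, new_color = rng.integers(0, 4), rng.integers(0, 3)
    trial = colors.copy()
    trial[node] = new_color
    delta = conflicts(trial) - conflicts(colors)
    # Stochastic acceptance: a worse move passes with probability exp(-delta / T)
    if delta <= 0 or rng.random() < np.exp(-delta / temperature):
        colors = trial
    temperature *= 0.99                            # gradually cool down

print("conflicting edges:", conflicts(colors), "coloring:", colors.tolist())
```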
Read: Light Act As A Magnet In A New Quantum Simulator
More specifically, ReStoCNet, with its memory-efficient probabilistic learning and event-driven computing, is well suited for neuromorphic hardware based on CMOS and emerging stochastic device technologies such as phase-change memory and resistive random-access memory, which could improve memory efficiency in battery-powered devices.