- A new memristor-based in-memory computing device can solve partial differential equations more efficiently than existing supercomputers.
- Researchers demonstrated it in a simulation of a plasma reactor, achieving better throughput and power efficiency.
You may have heard of an electronic component called the memristor: a non-volatile module that controls the flow of electric current in a circuit. It turns out that arranging these modules in a new way on a silicon chip could let them work as a general-purpose computer with far lower (nearly 100 times less) energy consumption. This would make supercomputers more efficient and boost the performance of low-power devices like smartphones.
Over the last five decades, semiconductor manufacturers have consistently improved hardware performance. But although processors and memories have become extremely fast, they have never been made truly efficient: data still has to travel in and out of memory, and the processor has to wait for it.
To overcome this limitation, one could use memristors (named by combining two words: memory and resistor). As the name suggests, a memristor combines processing and memory in the same device and can be configured into several resistance states, which eliminates the data-transfer overhead of traditional computers.
Unlike binary bits (0 and 1), a memristor's resistance can take a continuum of values. This analog nature is beneficial for applications like artificial neural networks. For general-purpose computing, however, it is a drawback: conventional circuitry cannot precisely distinguish the tiny variations in the electric current passing through a memristor, which limits precision.
Now, researchers at the University of Michigan have developed a memristor-based in-memory computing device that overcomes these precision limitations.
What Exactly Did They Do?
An array of memristors integrated on a circuit board | Credit: University of Michigan
They digitized the current output by assigning each current range a particular bit value (0 or 1). They also mapped large mathematical problems onto smaller blocks within the array, improving both the flexibility and the efficiency of the system.
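To illustrate the digitization idea, here is a minimal sketch of how an analog column current could be quantized into bits. The full-scale value, bit width, and function name are illustrative assumptions, not details from the paper; real read-out circuitry is far more involved.

```python
def current_to_bits(current, full_scale, n_bits=2):
    """Quantize an analog current into n_bits by splitting the range
    [0, full_scale) into 2**n_bits equal sub-ranges, each mapped to a
    binary code. Purely illustrative values and interface."""
    levels = 1 << n_bits
    # Clamp to the highest level so currents at/above full scale still map.
    level = min(int(current / full_scale * levels), levels - 1)
    # Express the level as a list of bits, most significant first.
    return [(level >> b) & 1 for b in reversed(range(n_bits))]

# A current at 74% of full scale falls in the third of four ranges.
print(current_to_bits(0.74, full_scale=1.0, n_bits=2))  # -> [1, 0]
```

This kind of thresholding is what lets an analog crossbar interoperate with ordinary digital logic.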
The resulting blocks, termed memory-processing units, can efficiently run data-intensive tasks and AI algorithms. They are also well suited to large matrix operations, such as those in weather-forecasting simulations.
A simple matrix (a two-dimensional table of rows and columns) can be mapped directly onto the memristor grid. By applying a specific sequence of voltage pulses along the rows, the memristors perform the multiplications and additions across rows and columns concurrently. To read out the answers, you just measure the current at the end of each column.
Reference: Nature Electronics | doi:10.1038/s41928-018-0100-6 | University of Michigan
A conventional processor, in contrast, reads the value of every cell in the matrix, performs each multiplication, and finally adds up each column, one step at a time.
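The crossbar read-out described above follows from Ohm's law and Kirchhoff's current law: each memristor at row i, column j holds a conductance G[i][j], and driving the rows with voltages V[i] makes each column carry the current I[j] = Σ G[i][j]·V[i]. The sketch below simulates that single parallel step; the numbers are illustrative, not from the paper.

```python
def crossbar_matvec(G, V):
    """Simulate one parallel read of a memristor crossbar: each column
    current is the sum of conductance x voltage over all rows
    (I[j] = sum_i G[i][j] * V[i])."""
    n_rows, n_cols = len(G), len(G[0])
    return [sum(G[i][j] * V[i] for i in range(n_rows))
            for j in range(n_cols)]

# Illustrative conductances (siemens) and row voltages (volts).
G = [[1.0, 0.5],
     [2.0, 1.5]]
V = [0.3, 0.1]
print(crossbar_matvec(G, V))  # column currents, i.e. the matrix-vector product
```

In hardware, the entire double sum happens in one shot as physics, which is where the throughput and energy advantage comes from.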
To demonstrate the system, the researchers solved partial differential equations on a 32×32 memristor array (representing a single block of a future system). These equations are ubiquitous in engineering and scientific research, and they are very difficult to solve.
The memristor array viewed under an electron microscope | Credit: University of Michigan
While it is often impossible to solve partial differential equations exactly, a supercomputer can find approximate solutions. These problems typically involve a huge data matrix, so the processor-memory communication bottleneck is exactly what a memristor array handles well.
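To see why PDE solving reduces to matrix arithmetic, consider a toy case: discretizing the one-dimensional Poisson equation u''(x) = f(x) on a grid turns it into a linear system, solved below by Jacobi iteration. Each sweep is essentially one matrix-vector product, the operation a crossbar performs in a single parallel step. This is a generic textbook sketch, not the solver from the paper; the grid size and right-hand side are made up.

```python
def jacobi_poisson(f, n, iters=2000):
    """Approximately solve u'' = f on (0, 1) with u(0) = u(1) = 0,
    using n interior grid points and Jacobi iteration. The update
    u[i] = (u[i-1] + u[i+1] - h*h*f[i]) / 2 comes from the standard
    second-order finite-difference stencil."""
    h = 1.0 / (n + 1)
    u = [0.0] * n
    for _ in range(iters):
        u = [0.5 * ((u[i - 1] if i > 0 else 0.0) +
                    (u[i + 1] if i < n - 1 else 0.0) -
                    h * h * f[i])
             for i in range(n)]
    return u

# With f = -2 everywhere, the exact solution is u(x) = x * (1 - x),
# so the midpoint value should approach 0.25.
u = jacobi_poisson([-2.0] * 9, n=9)
print(u[4])
```

Every Jacobi sweep touches the whole matrix, so on conventional hardware the data shuttles back and forth each iteration; an in-memory array sidesteps that traffic entirely.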
Read: The World’s Coldest Nanoelectronic Chip | 2.8 milliKelvin
The team verified the system's performance on a real-world problem, using the memristor-based partial differential equation solver within a plasma-hydrodynamics simulator (like those used in circuit fabrication). The results were as reliable as those from a traditional digital differential equation solver, with better throughput and power efficiency.