- Researchers build a new system that can solve a quintessential discrete optimization problem, called MaxSAT.
- This analog solver outperforms digital methods on such problems, and the approach can be extended to other optimization problems as well.
Today’s digital computers perform most tasks well. They are well suited to numerical computation, word processing, web browsing, and graphic design. But because they rely on binary code (0s and 1s), they are not ideal for solving every problem.
Digital computing is approaching the limits of its potential, which is why some mathematicians have taken a renewed interest in analog computing: it can help advance computation beyond the digital framework.
Recently, researchers at the University of Notre Dame and Babes-Bolyai University in Romania developed a new analog solver that can find optimal solutions to NP-hard problems.
A problem is NP-hard when it is at least as hard as the hardest problems in NP; no known algorithm solves such problems in polynomial time, and the time required to reach a solution grows exponentially with problem size. Problems of this kind arise in medical imaging, bioinformatics, protein folding, and scheduling.
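To make the exponential growth concrete, here is a minimal brute-force sketch of MaxSAT, the problem the new solver targets: find the assignment of Boolean variables that satisfies the largest number of clauses. This is an illustration of why exhaustive search fails at scale, not the researchers' method; checking all 2^n assignments means every extra variable doubles the work.

```python
from itertools import product

def max_sat_brute_force(n_vars, clauses):
    """Count the largest number of clauses any assignment satisfies.
    A literal k > 0 means variable k; k < 0 means its negation."""
    best = 0
    for bits in product([False, True], repeat=n_vars):  # all 2^n assignments
        satisfied = sum(
            any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        best = max(best, satisfied)
    return best

# (x1 or x2), (not x1 or x3), (not x2 or not x3), (not x3): all four
# clauses are satisfied by x1 = False, x2 = True, x3 = False
print(max_sat_brute_force(3, [(1, 2), (-1, 3), (-2, -3), (-3,)]))  # prints 4
```

At 3 variables this loop runs 8 times; at 100 variables it would need 2^100 iterations, which is why exact solvers become useless long before problem sizes of practical interest.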
Researchers tested their analog solver on a wide range of NP-hard problems, and they found that this new method has the potential to lead to better solutions in less time.
Why Analog Computing?
Analog computers were extremely popular in the mid-20th century. Every major government agency and company concerned with problems in dynamics had a giant analog computing center. They were used to launch rockets into space, guide weapons on battleships, and simulate aircraft dynamics.
Unlike digital computers, analog computers operate on continuous physical quantities such as voltage, weight, speed, temperature, and pressure. And because they use continuous values, they are immune to quantization noise.
Analog computers can be designed to solve a variety of problems, and they perform mathematical operations directly. For example, to subtract one number from another, an analog computer subtracts the voltages that represent those values and immediately reads off the correct result.
They can be used for real-time operation and simultaneous computation, and their continuous behavior can offer direct insight into a problem and its errors. And since they do not require quantization, they are well suited to signal modulation/demodulation and high-speed motor control.
However, digital computers took over the market in the 1980s. They were more flexible, faster, and more accurate at general-purpose tasks, and as efficient algorithms came along, their performance got even better.
A vintage analog AMF665D computer | Image credit: Francis Massen / YouTube
But digital computers, including modern ones, can’t solve NP-hard problems with a large number of variables. The difficulty with most optimization problems is that you can’t easily determine whether a given solution is optimal: proving that no better solution exists is just as difficult as the original problem.
Analog Solver With High Performance
The new continuous-time dynamical system can solve a quintessential discrete optimization problem called MaxSAT. The method relies on a deterministic system of ordinary differential equations and a heuristic technique for predicting the likelihood that the optimal solution has been found by analog time t.
In analog circuits the von Neumann bottleneck is eliminated: the circuit itself acts as both processor and memory. Implementing the approach on a digital computer, on the other hand, requires an ODE integrator algorithm that discretizes the continuous-time equations and solves them step by step while controlling the numerical error.
In digital form, the solver is less efficient because the dynamics involve several thousand coupled ordinary differential equations, and integrating them step by step is a time-consuming process.
And since the formulation is general, the approach can be extended to other optimization problems as well. The researchers plan to design and build devices based on it.