Depth Sensors In Self-Driving Cars Are Now 1000 Times Better

  • Researchers develop a new approach that enhances the resolution of time-of-flight depth sensors a thousandfold.
  • It borrows ideas from interferometry and LIDAR to capture objects at much higher resolution.
  • The system has a depth resolution of 3 micrometers at a range of 2 meters.

The era of self-driving cars is here, and big players like Google and Tesla are rising to the forefront. However, the technology has been associated with many safety concerns. For instance, what would the algorithms do if they suddenly detected an animal in front of the moving car? Would they swerve to spare the animal or protect the passengers first? Also, self-driving cars can't function properly in heavy rain, calling into question what role drivers might have to play if the technology fails.

Recent research from MIT tries to solve a few of the problems that come with self-driving technology. The researchers have developed a new method for accurately measuring distances through fog that is far better than today's sensor technology.

The new depth sensor, combined with an effective computational method, enhances the resolution of time-of-flight depth sensors a thousandfold. That is the kind of resolution that can easily detect objects in fog and make self-driving cars safer.

Vision Range

The existing technology is capable enough for Intelligent Parking Assist Systems (IPAS) and collision detection systems. These sensors have a depth resolution of 1 centimeter at a range of 2 meters, and the resolution degrades rapidly as the range increases. At longer distances, the measurements become too coarse for reliable obstacle detection, which in the worst case could even lead to loss of life.

The new time-of-flight system has a depth resolution of 3 micrometers at the same range of 2 meters. The lead researcher, Achuta Kadambi, carried out tests in which he transmitted a light signal through half a kilometer of optical fiber (with regularly spaced filters) to simulate the power falloff experienced over long distances. He found that the system could still achieve a depth resolution of 1 centimeter at a range of half a kilometer.

How It Works

Two factors determine the system's resolution: light-burst length and detection rate.

A very short burst of light is fired, and the camera measures the time the light takes to return. That time tells how far away the object is.
The detection rate is set by the modulator that turns the light beam on and off. Existing detectors can make approximately 100 million measurements per second, which limits the system to centimeter-scale depth resolution.
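
To make the numbers concrete, here is a minimal Python sketch of the time-of-flight arithmetic described above. The 1/100-of-a-cycle phase precision used to translate a detection rate into a depth resolution is an illustrative assumption, not a figure from the paper.

```python
C = 299_792_458.0  # speed of light (m/s)

def depth_from_round_trip(t_seconds: float) -> float:
    """Light travels to the object and back, so depth is half the path."""
    return C * t_seconds / 2.0

def depth_resolution(mod_freq_hz: float,
                     phase_precision_cycles: float = 0.01) -> float:
    """One modulation cycle spans a one-way depth of c / (2 * f).
    The resolvable depth step scales with how finely the phase of that
    cycle can be measured (assumed here: 1/100 of a cycle)."""
    return (C / (2.0 * mod_freq_hz)) * phase_precision_cycles

print(depth_from_round_trip(13.3e-9))  # ~13.3 ns round trip -> ~2 m
print(depth_resolution(100e6))         # 100 MHz -> ~1.5 cm (centimeter scale)
print(depth_resolution(1e9))           # 1 GHz   -> ~1.5 mm
```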

The new system uses ideas from interferometry and LIDAR, which allow it to capture objects at much higher resolution.

Interferometry refers to splitting a light beam into two equal parts, where one part is fired into the scene while the other is kept circulating locally. The reflected beam is then merged with the locally circulating one, and the phase difference between the two beams reveals the precise distance of the object.
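
As a toy illustration of that last step, the sketch below converts a phase difference into a distance. A real system must also unwrap the phase, since it repeats every half wavelength; the 1550 nm wavelength and quarter-cycle phase shift are assumptions chosen for the example.

```python
import math

def distance_from_phase(phase_rad: float, wavelength_m: float) -> float:
    """The reflected beam travels an extra 2*d relative to the reference
    beam, so a phase difference maps to a one-way distance of
    (phase / 2*pi) * wavelength / 2 -- modulo half a wavelength."""
    return (phase_rad / (2.0 * math.pi)) * wavelength_m / 2.0

# A quarter-cycle shift of 1550 nm light corresponds to ~0.19 micrometers:
print(distance_from_phase(math.pi / 2.0, 1550e-9))
```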

LIDAR (Light Detection and Ranging) technology, on the other hand, enables slow cameras to image high-frequency (gigahertz-bandwidth) signals.

Cascaded LIDAR using Beat Notes

Time-of-flight images of human scans captured at 1 GHz, 500 MHz, and 120 MHz.

Reference: MIT Media Lab | DOI: 10.1109/ACCESS.2017.2775138

Beat notes are usually low-frequency sounds that can be detected by low-bandwidth equipment. For example, if one device produces a 330 Hz pitch and another produces 300 Hz, the difference frequency of 30 Hz is the beat note.

The same concept is applied to modulated light beams: the interference of two gigahertz beams produces a beat note at their much lower difference frequency, and that beat contains all the essential data needed to gauge distance.
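
Here is a quick numerical sketch of that heterodyning idea: two ~1 GHz tones that differ by 1 MHz produce an intensity signal whose low-frequency content sits exactly at the 1 MHz beat, slow enough for a low-bandwidth detector. The specific frequencies are illustrative, not the paper's values.

```python
import numpy as np

f1, f2 = 1.000e9, 1.001e9            # two ~1 GHz tones, 1 MHz apart
t = np.linspace(0.0, 5e-6, 200_000)  # 5 microseconds, sampled at ~40 GS/s

mixed = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
intensity = mixed ** 2               # a square-law detector sees intensity

# Look only at the low-frequency part of the spectrum, where a slow
# detector operates; the dominant component is the |f1 - f2| beat note.
spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
low = freqs < 10e6
print(f"beat note: {freqs[low][spectrum[low].argmax()] / 1e6:.1f} MHz")  # 1.0
```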

Basically, it is like switching a flashlight on and off millions of times per second, but it is done electronically rather than optically.

Lower-frequency systems cannot work properly in fog because fog scatters light. Since the phase shift is much larger relative to the signal period in GHz optical systems, they are far better at compensating for fog than MHz systems.

Other Applications


The researchers have demonstrated path-length control down to nearly 3 micrometers, a small fraction of the width of a human hair. Such precision could enable the inversion of scattering, letting doctors and medical researchers see deeper into tissue using visible wavelengths of light.

Robots could navigate through an orchard instead of just mapping out its topology. And most of the wide range of potential applications would run in real time.

Written by
Varun Kumar

I am a professional technology and business research analyst with more than a decade of experience in the field. My main areas of expertise include software technologies, business strategies, competitive analysis, and staying up-to-date with market trends.

I hold a Master's degree in computer science from GGSIPU University. If you'd like to learn more about my latest projects and insights, please don't hesitate to reach out to me via email at [email protected].
