- Researchers demonstrate a levitating volumetric display using ultrasound waves.
- The hologram provides visual, audible and tactile 3D content.
- It uses arrays of ultrasonic transducers that produce soundwaves to float and control a tiny polystyrene bead.
Existing holographic displays can create 3D visual content without the need for glasses, but they have limited response time and persistence-of-vision capabilities.
Now, researchers at the University of Sussex and Tokyo University of Science have developed an animated 3D hologram that provides not only visual but also audible and tactile 3D content.
More specifically, they have demonstrated a levitating volumetric display using ultrasound waves. It’s called the multimodal acoustic trap display (MATD). The system traps a particle acoustically and illuminates it with RGB light to control its color.
The setup comprises two arrays of ultrasonic transducers that produce soundwaves to levitate and control a tiny, two-millimeter-wide polystyrene bead.
To deliver simultaneous auditory and tactile content, the MATD uses time multiplexing with a secondary trap, amplitude modulation, and phase minimization.
The bead moves at speeds of up to 3.75 m/s horizontally and 8.75 m/s vertically, offering particle-manipulation capabilities superior to those of existing optical or acoustic holographic techniques.
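To see why these speeds matter for image formation, a quick back-of-the-envelope sketch helps: the faster the bead moves, the longer the path it can trace within one persistence-of-vision window. The roughly 0.1-second window below is an illustrative assumption, not a figure from the article.

```python
# Back-of-the-envelope: how much path the bead can trace within one
# persistence-of-vision window, at the reported peak speeds.
# The 0.1 s window is an illustrative assumption, not from the article.

POV_WINDOW_S = 0.1    # assumed persistence-of-vision duration
H_SPEED_M_S = 3.75    # reported peak horizontal bead speed
V_SPEED_M_S = 8.75    # reported peak vertical bead speed

def path_per_frame(speed_m_s, window_s=POV_WINDOW_S):
    """Maximum path length (in cm) the bead can cover per refresh window."""
    return speed_m_s * window_s * 100.0

print(f"horizontal: {path_per_frame(H_SPEED_M_S):.1f} cm per frame")
print(f"vertical:   {path_per_frame(V_SPEED_M_S):.1f} cm per frame")
```

Even under this rough assumption, tens of centimeters of path per frame is ample to sweep out shapes inside the prototype's display volume.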
As the bead traces out the shape of an object in three dimensions, rapidly switching LEDs shine red, green, and blue light on it, producing the desired color at each point along its path.
Since the bead moves faster than the human eye can track, viewers see a complete three-dimensional shape. No optical illusion is involved, unlike displays that trick the brain into perceiving a 2D image as a 3D object.
And because the holographic image actually exists in 3D space, it can be viewed from all angles without quality degradation. More importantly, the technique doesn’t cause eye strain.
In addition to creating the visual effect, the transducers make the bead vibrate at frequencies that produce soundwaves, and these vibrations can be tuned to span the entire range of human hearing. The moving bead could thus render an image of a talking face while also acting as a small speaker, so the face could speak as well.
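One way to picture how an ultrasonic system can carry audible content is amplitude modulation: an audible signal is encoded as the envelope of the 40 kHz carrier. The sketch below is illustrative only, not the authors' code, and the 1 kHz tone and modulation depth are assumed values.

```python
import math

# Illustrative sketch (not the authors' implementation): amplitude-modulating
# a 40 kHz ultrasonic carrier with an audible tone. The audible signal rides
# on the carrier as its envelope.

CARRIER_HZ = 40_000   # ultrasonic carrier frequency used by the transducers
TONE_HZ = 1_000       # audible tone to encode (illustrative choice)
MOD_INDEX = 0.8       # modulation depth, kept below 1 to avoid over-modulation

def am_sample(t):
    """Amplitude-modulated drive signal at time t (seconds)."""
    envelope = 1.0 + MOD_INDEX * math.sin(2 * math.pi * TONE_HZ * t)
    return envelope * math.sin(2 * math.pi * CARRIER_HZ * t)

# One millisecond of samples at 8x the carrier frequency:
rate = 8 * CARRIER_HZ
samples = [am_sample(n / rate) for n in range(rate // 1000)]
```

Keeping the modulation index below 1 ensures the envelope never inverts, so the encoded tone is recoverable without distortion.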
It is also possible to make the display tactile by creating ultrasonic soundwaves. A butterfly’s flapping wings, for example, could be felt if viewers put their hands close enough to the holographic image.
Image credit: Nature
The prototype developed in this study can produce images inside a 10-centimeter-wide cube of air. It brings us a step closer to practical volumetric displays that offer full sensorial reproduction of virtual content.
In the near future, more powerful transducers could generate larger animations and employ multiple beads, though choreographing the illumination of multiple beads at once will be challenging.
The technology enables positioning and amplitude modulation of acoustic traps at the sound-field frequency rate (40 kHz) and opens new doors for multimodal 3D displays. It also offers opportunities for high-speed, non-contact manipulation of matter, with applications in biomedicine and computational fabrication.
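Combining the 40 kHz update rate with the reported peak bead speeds gives a sense of how fine-grained the control is: each trap update moves the bead less than a quarter of a millimeter. This is a simple arithmetic sketch using only figures from the article.

```python
# Per-update displacement of the bead, combining the 40 kHz trap-update
# rate with the reported peak speeds (both figures from the article).

UPDATE_RATE_HZ = 40_000

def step_um(speed_m_s, rate_hz=UPDATE_RATE_HZ):
    """Per-update displacement in micrometers at a given bead speed."""
    return speed_m_s / rate_hz * 1e6

print(f"horizontal step: {step_um(3.75):.2f} um")  # at peak horizontal speed
print(f"vertical step:   {step_um(8.75):.2f} um")  # at peak vertical speed
```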