- A new method could remove the visual distortions in 3D displays without requiring additional bulky optics.
- The system creates a clear 3D image in free space, making it suitable for AR applications.
We mostly interact with digital content through keyboards and 2D touch panels. However, technologies like Virtual Reality (VR) and Augmented Reality (AR) now promise freedom from these restrictions.
VR/AR devices have their own disadvantages, such as a tendency to induce eye strain, dizziness, and motion sickness due to their stereoscopy-based designs. Prolonged use of these devices can intensify feelings of nausea and disorientation, a condition also known as VR sickness.
To overcome these limitations, researchers in Belgium and Japan have been exploring a combination of holography and light-field technology. Although this requires additional equipment, they have tried to keep the size and cost low so that the technology can achieve commercial success.
How It Works
The characteristics of objects (such as size, color, texture, height, and distance) are defined by the light they scatter in different directions at different intensities. The human eye perceives these modulated rays and sends signals to the brain, which recreates those characteristic features.
True 3D displays, such as holographic and light-field display devices, can generate the same modulated rays without an actual object being present. However, accurately reconstructing all of the object’s features is an expensive process.
That’s why the researchers first computed the required modulation and then transformed the data into light signals using an LCD. These signals are then fed to other optical instruments such as beam combiners, mirrors, and lenses.
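To make the idea of "computed modulation displayed on an LCD" concrete, here is a minimal sketch of how a phase-only modulation pattern (here, a simple converging-lens phase) can be computed and quantized for a spatial light modulator. This is an illustrative example, not the authors' actual pipeline; all parameter values (wavelength, focal length, pixel pitch) are hypothetical.

```python
import numpy as np

# Hypothetical example parameters (not from the paper)
wavelength = 633e-9      # laser wavelength in metres
focal_length = 0.1       # desired focal length in metres
pixel_pitch = 8e-6       # LCD pixel size in metres
n_pixels = 512           # pixels per side of the LCD panel

# Pixel coordinates centred on the optical axis
coords = (np.arange(n_pixels) - n_pixels / 2) * pixel_pitch
x, y = np.meshgrid(coords, coords)

# Thin-lens phase profile, wrapped into [0, 2*pi) for a phase-only panel
k = 2 * np.pi / wavelength
phase = np.mod(-k * (x**2 + y**2) / (2 * focal_length), 2 * np.pi)

# Quantise the phase to the LCD's 8-bit grey levels
lcd_levels = np.round(phase / (2 * np.pi) * 255).astype(np.uint8)
```

The `lcd_levels` array is what would actually be sent to the panel; the illumination laser picks up the wrapped phase and converges as if it had passed through a real lens.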
They developed a holographic optical element with a thin layer of photosensitive material that can replicate the jobs of several optical modules. The element is mostly made of glass, which determines the quality and performance of the display.
To record/print several optical components in one go, the team developed a method called Digitally Designed Holographic Optical Element (DDHOE). It can record all the characteristic features of different optical components without requiring the actual components to be physically present.
Source: The Optical Society
Basically, the aim is to measure the hologram of all the components’ features and optically recreate them together using a laser and an LCD. The resulting optical signals match the light that would be modulated by all the actual components acting together. The recorded hologram is finally projected onto the thin sheet of photosensitive material.
(a) DDHOE lens array, (c) computer generated 3D scene, (d) Final 3D image | Credit: Boaz Jessie Jackin
The team has already tested this technique on a head-up light-field 3D display. Since it is a see-through system that outputs 3D pictures and videos, the technology could have a variety of applications in AR.
To display multi-view pictures on a glass microlens-array film, the system uses a conventional 2D projector. The thin film modulates the light coming from the projector and recreates the image in 3D in free space.
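The "multi-view picture" the projector shows is an interleaved elemental image: under each lenslet sit several pixels, one per viewpoint, so each eye position sees a different view. The sketch below illustrates this interleaving for a horizontal-parallax-only case; the layout and function name are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def interleave_views(views):
    """views: array of shape (n_views, height, width).
    Returns an elemental image of shape (height, width * n_views) where
    the n_views pixels under each lenslet come from the n_views
    viewpoints, giving horizontal parallax behind the lenslet array."""
    n_views, height, width = views.shape
    out = np.zeros((height, width * n_views), dtype=views.dtype)
    for v in range(n_views):
        # view v supplies the v-th pixel column under every lenslet
        out[:, v::n_views] = views[v]
    return out

# Tiny example: three constant "views" of a 2x4 image
views = np.stack([np.full((2, 4), v) for v in range(3)])
elemental = interleave_views(views)
# elemental rows read 0,1,2,0,1,2,... one pixel per view per lenslet
```

Real systems interleave in both directions and account for lenslet pitch versus pixel pitch, but the principle is the same.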
How It Differs From Other Methods
In traditional techniques, light from the projector diverges before striking the micro-lens array, which distorts the final 3D image in space. To fix this, the projector light must be collimated into a parallel beam.
However, a bigger display requires larger collimating lenses, which raises the cost of components. This is the main reason these techniques haven’t achieved commercial success.
The new method, on the other hand, incorporates the collimation function into the micro-lens array itself by fabricating it with DDHOE. This eliminates the need for bulky collimation optics. The researchers believe that their technique will soon replace existing designs that rely on heavy optical components.
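In phase terms, "incorporating the collimation function into the lens array" amounts to summing the phase profiles of the two optical functions into one element. The sketch below adds a collimating phase (the conjugate of the projector's divergence) to a lenslet phase; all values are hypothetical and this is only a conceptual illustration of combining optical functions, not the DDHOE fabrication recipe.

```python
import numpy as np

# Hypothetical example parameters
wavelength = 532e-9
k = 2 * np.pi / wavelength
pitch = 10e-6
n = 256
c = (np.arange(n) - n / 2) * pitch
x, y = np.meshgrid(c, c)

# Phase of a beam diverging from a projector at distance z_proj
z_proj = 0.5
diverging = k * (x**2 + y**2) / (2 * z_proj)

# Collimation is the conjugate (negative) of that divergence
collimating = -diverging

# Phase of one lenslet of the micro-lens array, focal length f
f = 0.01
lenslet = -k * (x**2 + y**2) / (2 * f)

# One combined element performs both jobs at once
combined = np.mod(collimating + lenslet, 2 * np.pi)
```

Recording this combined profile into a single holographic element is what removes the separate, bulky collimating lens from the optical path.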