Augmented Reality Comes into Focus


A new kind of augmented-reality display uses a metasurface and an aspherical lens to convert an original image (top) into a projected image (center) and then refines the projected image using a neural network (bottom). [Image: Adapted from ACS Photonics 2024, DOI: 10.1021/acsphotonics.4c00989]

Scientists in China have shown how augmented-reality displays could be incorporated into eyeglasses (ACS Photonics, doi: 10.1021/acsphotonics.4c00989). Their system combines a conventional lens with a very finely patterned metasurface and uses a neural network to filter distortions out of the projected images.

Going small with metasurfaces

Augmented reality, which superimposes digital images on the real scene a user sees, could potentially find applications in everything from architecture and navigation to emergency response and health care. However, although the technology is already used for head-up displays in cars, size constraints currently make it less suited to standard glasses: multiple lenses are required to obtain a reasonable field of view, which increases the length of the device.

Several research groups are trying to shrink augmented-reality systems by exploiting metasurfaces: extremely thin, lightweight arrays of subwavelength-scale structures that modulate light's phase, amplitude and polarization. However, the short wavelength of visible light makes such surfaces tricky to fabricate with sufficient accuracy, limiting their resolution and, with it, their field of view (to less than 3° for a single metasurface).

In the latest work, Yaoguang Ma of Zhejiang University and colleagues have shown how to reduce the size of such a system while maintaining a wide field of view and minimal distortion. Their trick is to combine a metasurface that has an aspherical phase profile with an aspherical lens, and then to use artificial intelligence to refine the projected image so that it more closely resembles the original.
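
As a rough illustration of what an aspherical phase profile means here, the Python sketch below evaluates the ideal hyperbolic metalens phase plus higher-order polynomial corrections. The focal length and aspheric coefficients are invented for illustration and are not taken from the paper.

    import numpy as np

    # Illustrative only: a metalens phase is often written as the ideal
    # hyperbolic focusing profile plus even-order polynomial terms, which
    # is what makes it "aspherical". All numbers here are hypothetical.
    wavelength = 532e-9    # green light (m); the device uses a green micro-LED
    focal_length = 4e-3    # hypothetical focal length (m)
    a4, a6 = 1e8, -1e13    # hypothetical aspheric coefficients (m^-4, m^-6)

    def aspherical_phase(r):
        """Target phase (radians) at radial distance r (m) from the axis."""
        hyperbolic = -(2 * np.pi / wavelength) * (
            np.sqrt(r**2 + focal_length**2) - focal_length)
        return hyperbolic + a4 * r**4 + a6 * r**6   # aspheric corrections

    # Sample the profile across the 3.2-mm-diameter aperture reported for
    # the metasurface, wrapping it into [0, 2*pi) as a fabricated one would be.
    r = np.linspace(0.0, 1.6e-3, 500)
    wrapped_phase = np.mod(aspherical_phase(r), 2 * np.pi)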

The researchers have made a device that shines green light from a micro light-emitting diode onto a silicon nitride metasurface and then through a double-sided aspherical lens made from a synthetic polymer. The metasurface has a diameter of 3.2 mm and feature sizes as small as 100 nm, while the whole device measures just under 8 mm long.

Testing the hybrid lens

To put their "meta/refractive hybrid lens" through its paces, Ma and colleagues projected the image it generated onto a white screen and analyzed the resulting point-spread functions. They found they could achieve excellent resolution, with minimal aberrations and distortion, across a 30° field of view. By carefully engineering the dispersion characteristics of both the metasurface and the aspherical lens, they also showed that the system could perform as if it were fed monochromatic light, even though the actual source has a finite bandwidth.
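
For readers unfamiliar with point-spread-function analysis, the snippet below shows one simple way such a measurement can be quantified: estimating the full width at half maximum (FWHM) of a projected spot. The camera data and pixel pitch are stand-ins, not values from the study.

    import numpy as np

    def psf_fwhm(psf, pixel_pitch_um):
        """Crudely estimate a spot's FWHM (µm) along the row through its peak."""
        peak_y, peak_x = np.unravel_index(np.argmax(psf), psf.shape)
        profile = psf[peak_y, :].astype(float)
        above = np.where(profile >= profile[peak_x] / 2.0)[0]  # >= half max
        return (above[-1] - above[0] + 1) * pixel_pitch_um

    # Synthetic test spot: a Gaussian with sigma = 2 pixels standing in for
    # a background-subtracted camera image of a single projected point.
    yy, xx = np.mgrid[0:64, 0:64]
    spot = np.exp(-((xx - 32.0)**2 + (yy - 32.0)**2) / (2 * 2.0**2))
    print(psf_fwhm(spot, pixel_pitch_um=3.45))  # ~17 µm with this pixel count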

The researchers then tested their device as if it were housed in one of the arms of a pair of glasses, directing its output at the in-coupling region of an eyepiece that doubles as a waveguide. This yielded an image measuring 28 × 17 mm, a university logo in one demonstration, which remained visible against real-world backdrops such as a human hand.

However, as the researchers point out, meta/refractive hybrid lenses will always exhibit some residual aberration, caused by imperfections in the fabrication of the metasurface and the aspherical lens, as well as by misalignment between the device's components. To minimize the impact of this aberration, they employed a neural network that compares original and degraded images. The team trained the network on numerous examples of both types of image, so that it could partially restore the quality of a projected image by appropriately preprocessing the image before it is fed to the device.
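
The article above does not spell out the network's design, but the general idea can be sketched as follows: a small convolutional network (written here in PyTorch) learns a pre-distortion, with a simple blur standing in for a differentiable model of the display's residual aberration. The architecture, the simulate_display placeholder and the toy training data are all assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Preprocessor(nn.Module):
        """Small CNN that predicts a correction to apply before projection."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, x):
            return x + self.net(x)   # residual correction on top of the input

    def simulate_display(img):
        # Placeholder for a differentiable model of the hybrid lens's
        # residual aberration; a real pipeline would use measured or
        # simulated point-spread functions instead of a uniform blur.
        return F.avg_pool2d(img, kernel_size=5, stride=1, padding=2)

    model = Preprocessor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(100):                  # toy training loop
        original = torch.rand(8, 1, 64, 64)  # stand-in training images
        projected = simulate_display(model(original))
        loss = F.mse_loss(projected, original)  # projection should match original
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()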

Ma and colleagues demonstrated the virtues of this preprocessing by comparing images of a raccoon. They found that the corrected image had a significantly higher peak signal-to-noise ratio than the uncorrected one, and that it was also a closer fit to the original picture (its structural similarity index measure increased from 70.3% to 74.3%), with details such as the raccoon's whiskers becoming more visible.
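
For context, the two figures of merit quoted above are standard image-quality metrics, and scikit-image's metrics module computes both. The images below are random stand-ins rather than the paper's raccoon data.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    # Stand-in grayscale images in [0, 1]; the "corrected" one is less noisy,
    # mimicking the effect of the preprocessing described above.
    rng = np.random.default_rng(0)
    original = rng.random((256, 256))
    uncorrected = np.clip(original + rng.normal(0.0, 0.10, original.shape), 0, 1)
    corrected = np.clip(original + rng.normal(0.0, 0.05, original.shape), 0, 1)

    for name, img in [("uncorrected", uncorrected), ("corrected", corrected)]:
        psnr = peak_signal_noise_ratio(original, img, data_range=1.0)  # in dB
        ssim = structural_similarity(original, img, data_range=1.0)    # in [0, 1]
        print(f"{name}: PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")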

Publish Date: 02 October 2024
