

Algorithm Delivers Perceptual Reality to Holographic Displays

Research at Stanford University may lead to more realistic displays in virtual and augmented reality headsets. A team at the university has developed a technique to reduce the speckle distortion often seen in conventional laser-based holographic displays, along with a technique to more realistically represent the physics that would apply to a 3D scene if it existed in the real world.

The research confronts the fact that current augmented and virtual reality headsets only show 2D images to each of the viewer’s eyes, rather than 3D or holographic images.

“They are not perceptually realistic,” said Gordon Wetzstein, associate professor of electrical engineering and leader of the Stanford Computational Imaging Lab.

Photograph of a holographic display prototype. Courtesy of the Stanford Computational Imaging Lab.

Image quality in existing holographic displays has been limited. Creating a holographic display on par with LCD quality has been a challenge, Wetzstein said, because it is difficult to control the shape of lightwaves at the resolution of a hologram. Advances have also been hindered by the gap between how light behaves in a simulation and how the same scene would look in a real display.
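The wave-propagation model that holographic display algorithms try to match can be illustrated with the textbook angular spectrum method, which simulates how a spatial light modulator's wavefront evolves over a distance. The sketch below is a generic, simplified version of that simulation (the function name, grid size, and the evanescent-wave shortcut are illustrative, not the team's code):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a complex optical field by `distance` using the
    textbook angular spectrum method. Band-limiting refinements used
    in practice are omitted for brevity."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pitch)  # spatial frequencies across columns
    fy = np.fft.fftfreq(n, d=pitch)  # spatial frequencies across rows
    FX, FY = np.meshgrid(fx, fy)
    # Components propagate only when (1/wavelength)^2 > fx^2 + fy^2
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance)
    H[arg < 0] = 0.0  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: propagate an idealized phase-only SLM pattern (flat phase)
slm = np.exp(1j * np.zeros((256, 256)))
out = angular_spectrum_propagate(slm, wavelength=532e-9, pitch=8e-6,
                                 distance=0.05)
```

A flat phase pattern should propagate without changing its uniform intensity, which offers a quick sanity check on the transfer function.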

Scientists have tried to create algorithms to address both problems. Wetzstein and his colleagues previously developed algorithms using neural networks — an approach called neural holography.
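Before neural approaches, a common baseline for computing a hologram's phase pattern was iterative phase retrieval, such as the classic Gerchberg-Saxton algorithm. The minimal sketch below (a single FFT stands in for the display optics; all names and parameters are illustrative) shows the kind of algorithm that neural holography aims to improve on:

```python
import numpy as np

def gerchberg_saxton(target_amp, iterations=100, seed=0):
    """Classic Gerchberg-Saxton phase retrieval: find a phase-only
    pattern whose far field (modeled as one FFT) approximates the
    target amplitude. A simplified pre-neural baseline."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(iterations):
        far = np.fft.fft2(np.exp(1j * phase))
        # Impose the target amplitude, keep the current far-field phase
        far = target_amp * np.exp(1j * np.angle(far))
        # Back-propagate and keep only the phase (the SLM constraint)
        phase = np.angle(np.fft.ifft2(far))
    return phase

# Example: optimize a phase pattern for a bright square target
target = np.zeros((64, 64))
target[16:48, 16:48] = 1.0
phase = gerchberg_saxton(target, iterations=200)
recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
```

The reconstructed amplitude should correlate strongly with the target, though residual speckle in the bright region is a known limitation of this family of methods.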

“Artificial intelligence has revolutionized pretty much all aspects of engineering and beyond,” Wetzstein said. “But in this specific area of holographic displays or computer-generated holography, people have only just started to explore AI techniques.”

In the current work, postdoctoral research fellow Yifan Peng, co-lead author of the research papers, helped design an optical engine to go into the holographic displays. The team's neural holographic display involved training a neural network to mimic the real-world physics of what was happening in the display, and it was able to generate images in real time. The team then paired it with an AI-inspired algorithm to provide an improved system for holographic displays that use partially coherent light sources: LEDs and SLEDs (superluminescent light-emitting diodes). These sources are favored for their cost, size, and energy requirements, and they also have the potential to avoid the speckled appearance of images produced by systems that rely on coherent light sources such as lasers.

However, the same characteristics that help partially coherent light sources avoid speckle also tend to produce blurred images with a loss of contrast. By building an algorithm specific to the physics of partially coherent light sources, the researchers produced the first high-quality, speckle-free holographic 2D and 3D images using LEDs and SLEDs.
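The trade-off between speckle and blur can be seen in a toy simulation: a partially coherent source behaves roughly like an incoherent sum of many independent coherent modes, and averaging their intensities lowers speckle contrast by about 1/sqrt(N). This is a generic illustration of that statistical effect, not the researchers' algorithm (the mode count and field statistics here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_intensity(shape, rng):
    """Fully developed speckle: intensity of a circular-Gaussian
    complex field, as produced by a single coherent laser line."""
    field = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    return np.abs(field) ** 2

def speckle_contrast(intensity):
    """Contrast = std / mean; ~1 for fully developed speckle."""
    return intensity.std() / intensity.mean()

shape = (512, 512)
coherent = speckle_intensity(shape, rng)  # single coherent mode
n_modes = 64                              # assumed number of independent modes
partial = np.mean(
    [speckle_intensity(shape, rng) for _ in range(n_modes)], axis=0)
```

With 64 independent modes, the contrast drops by roughly a factor of eight relative to the single-mode case, which is why broadband sources such as LEDs wash out speckle.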

The research was published in Science Advances (www.doi.org/10.1126/sciadv.abg5040) and in a paper to be presented at SIGGRAPH ASIA 2021.


©2024 Photonics Media