

Deep Learning Enables Structured Light 3D Imaging

Researchers at Nanjing University of Science and Technology have demonstrated a deep learning-enabled, dual-frequency multiplexing fringe projection profilometry (FPP) technique. FPP is a noncontact measurement technique for 3D imaging, and the researchers said their approach enables single-shot, unambiguous, high-precision structured light 3D imaging.

As part of the demonstration, the researchers trained a deep neural network to directly recover the absolute phase from a single composite fringe image that spatially multiplexes fringe patterns of two different frequencies. They performed experiments on both static and dynamic scenes, showing that the approach is robust to object motion and can produce high-quality 3D reconstructions of isolated objects from a single fringe image.
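To illustrate what such a spatially multiplexed input might look like, the sketch below sums a low- and a high-frequency sinusoidal fringe pattern into one composite image. This is a simplified assumption for illustration only, not the authors' exact encoding scheme; the frequency values and normalization are arbitrary.

```python
import numpy as np

def composite_fringe(width=640, height=480, f_low=1, f_high=48):
    """Hypothetical dual-frequency composite fringe: the average of a
    low-frequency (unambiguous) and a high-frequency (precise) sinusoidal
    pattern, each normalized to [0, 1]. Parameters are illustrative."""
    x = np.arange(width) / width                       # normalized column coordinate
    low = 0.5 + 0.5 * np.cos(2 * np.pi * f_low * x)    # low-frequency fringe
    high = 0.5 + 0.5 * np.cos(2 * np.pi * f_high * x)  # high-frequency fringe
    pattern = (low + high) / 2.0                       # spatially multiplexed composite
    return np.tile(pattern, (height, 1))               # repeat the 1D profile across rows

fringe = composite_fringe()
print(fringe.shape)  # (480, 640)
```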

By delivering high-precision 3D shape reconstruction from only one projected pattern, the technique helps close the gap between 3D imaging and 2D sensing in structured light systems. In FPP, a projector projects a series of fringe patterns onto a target, and a camera captures images that are modulated and deformed by the object. From the captured fringe patterns, the phase information of the object can be extracted algorithmically, for example with Fourier transform or phase-shifting methods.
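For reference, conventional phase-shifting methods recover a wrapped phase map from several shifted captures rather than a single shot. The sketch below shows the standard four-step phase-shifting formula; it is background material, not code from the paper.

```python
import numpy as np

def wrapped_phase_four_step(I0, I1, I2, I3):
    """Standard four-step phase-shifting algorithm: given four fringe images
    captured with phase shifts of 0, pi/2, pi, and 3*pi/2, return the wrapped
    phase map in (-pi, pi]."""
    return np.arctan2(I3 - I1, I0 - I2)
```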

FPP is widely used in optical metrology due to its noncontact, high-resolution, high-speed, and full-field measurement capabilities. These features are advantageous in intelligent manufacturing, reverse engineering, industrial inspection, and heritage preservation.

The researchers constructed two parallel U-shaped networks with the same structure. The first network takes the dual-frequency composite fringe image as input and, combined with the traditional phase-shifting physical model, predicts the sine and cosine terms used to calculate the wrapped phase map.
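In effect, the network replaces the multiple phase-shifted captures with predicted numerator and denominator terms for the arctangent step. A minimal sketch of that final step follows, assuming the network outputs are arrays named `sin_term` and `cos_term` (names are illustrative, not from the paper).

```python
import numpy as np

def wrapped_phase_from_network(sin_term, cos_term):
    """Combine the network-predicted sine and cosine terms into a wrapped
    phase map in (-pi, pi], mirroring the arctangent step of the
    phase-shifting physical model."""
    return np.arctan2(sin_term, cos_term)
```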

The second network predicts the fringe order information from the same dual-frequency composite fringe input.

Once trained on data sets, the networks are used to “de-multiplex” high-resolution, spectrum-crosstalk-free phases from the composite fringe and to reconstruct the high-accuracy absolute phase map that underpins the 3D reconstruction.
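The absolute phase then follows from the wrapped phase and the predicted fringe order in the usual phase-unwrapping way. A minimal sketch, assuming the second network outputs a per-pixel integer fringe-order map:

```python
import numpy as np

def absolute_phase(wrapped_phase, fringe_order):
    """Remove the 2*pi ambiguity by adding 2*pi times the per-pixel fringe
    order (an integer map predicted by the second network) to the wrapped
    phase. Variable names are illustrative, not taken from the paper."""
    return wrapped_phase + 2 * np.pi * fringe_order
```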

Traditional multifrequency composite methods cannot guarantee single-frame high-accuracy 3D imaging, and the researchers said that the work addresses this bottleneck. Additionally, it opens opportunities for single-shot, instantaneous 3D shape measurement of discontinuous and/or mutually isolated objects in fast motion.

The team plans to explore more advanced network structures and integrate more suitable physical models into deep learning networks.

The work was supported by the National Natural Science Foundation of China and received additional funding.

The research was published in Opto-Electronic Advances (www.doi.org/10.29026/oea.2022.210021).
