Deep Learning-Trained Neural Network Reconstructs OCT Images

A team of scientists from UCLA and the University of Houston (UH), led by Aydogan Ozcan in collaboration with Kirill Larin, used deep learning to train a neural network to rapidly reconstruct OCT images from undersampled spectral data. Although the deep-learning-based method was given significantly less spectral data than standard image reconstruction methods require, it reconstructed high-quality images free of spatial artifacts.

When standard image reconstruction methods are applied to undersampled spectral data, the resulting images typically exhibit severe spatial artifacts.
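To see why naive undersampling breaks standard reconstruction, consider a toy one-reflector model (a minimal sketch, not the paper's pipeline): in Fourier-domain OCT, an A-line is obtained by an inverse FFT of the spectral interferogram, and discarding every other spectral point halves the unambiguous depth range, so a deep reflector folds back to the wrong depth. The 1280- and 640-point counts match the figures in the article; everything else here is illustrative.

```python
import numpy as np

# Toy spectral interferogram for a single reflector (illustrative model only).
n_k = 1280                    # spectral points per full A-line, as in the article
depth_bin = 400               # reflector depth, beyond the 2x-undersampled Nyquist range
k = np.arange(n_k)
interferogram = np.cos(2 * np.pi * depth_bin * k / n_k)

# Standard reconstruction: inverse FFT of the full spectrum.
full_aline = np.abs(np.fft.ifft(interferogram))
peak_full = np.argmax(full_aline[: n_k // 2])     # peak lands at the true depth, bin 400

# Naive 2x undersampling: keep every other spectral point (640 points), then FFT.
under = interferogram[::2]
under_aline = np.abs(np.fft.ifft(under))
peak_under = np.argmax(under_aline[: len(under) // 2])

print(peak_full)   # 400: correct depth
print(peak_under)  # 240: aliased peak, not at bin 200 where the true depth would map
```

The aliased peak is exactly the kind of spatial artifact a standard pipeline cannot remove, because the undersampled measurement genuinely no longer distinguishes the true depth from its alias.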

To demonstrate the efficacy of the deep-learning-based framework for OCT imaging, the researchers trained and blindly tested a deep neural network using mouse embryo samples imaged by a swept-source OCT system. They also tested the approach on several types of human samples. A single image reconstruction network was trained across all tissue types, with one sample of each type reserved for blind testing. In the test phase, the network consistently achieved high-quality image reconstructions.

Using twofold undersampled spectral data (640 spectral points per A-line), the trained neural network reconstructed 512 A-lines in 0.59 ms while running on multiple GPUs, and it removed the spatial artifacts caused by the omitted spectral data points.
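As a back-of-the-envelope check using only the figures above (not a number reported in the paper), the stated batch time implies a reconstruction throughput approaching a million A-lines per second:

```python
a_lines = 512             # A-lines reconstructed per batch (from the article)
batch_time_s = 0.59e-3    # reconstruction time for that batch, in seconds
throughput = a_lines / batch_time_s
print(f"{throughput:,.0f} A-lines per second")  # prints "867,797 A-lines per second"
```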

The trained network produced a good match to images of the same samples reconstructed using the full spectral OCT data (1280 spectral points per A-line).


Deep learning improves image reconstruction in OCT using significantly less spectral data. Courtesy of the Ozcan Lab at UCLA.
The team further showed that the approach extends to 3× undersampled spectral data per A-line, with some degradation in reconstructed image quality relative to the 2× case. The researchers also demonstrated an A-line-optimized undersampling method, created by jointly optimizing the spectral sampling locations and the corresponding image reconstruction network; this joint optimization improved overall imaging performance while using even fewer spectral data points per A-line.
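A small numerical sketch suggests why optimizing the sampling locations themselves can help (a toy demonstration of the principle, not the paper's method): under uniform 2× undersampling, two reflectors at different depths can produce identical measurements, while an irregular set of the same number of spectral points keeps them distinguishable. The specific nonuniform locations below are a hypothetical hand-picked choice; the paper learns such locations jointly with the network.

```python
import numpy as np

n_k = 64
k = np.arange(n_k)

# Two reflectors at depths that alias onto each other under uniform 2x undersampling
# (on the 32-point coarse grid, depth 22 folds onto depth 10, since 22 = 32 - 10).
d1, d2 = 10, 22
s1 = np.cos(2 * np.pi * d1 * k / n_k)
s2 = np.cos(2 * np.pi * d2 * k / n_k)

uniform = k[::2]                              # 32 evenly spaced spectral points
print(np.allclose(s1[uniform], s2[uniform]))  # True: measurements are identical

# Same budget of 32 points, but irregular spacing (hypothetical illustrative choice).
nonuniform = np.r_[k[0:32:2], k[33:64:2]]
print(np.allclose(s1[nonuniform], s2[nonuniform]))  # False: the two depths differ
```

In other words, where the spectral samples fall determines which depths remain ambiguous, so choosing the locations jointly with the reconstruction network gives the network measurements it can actually invert.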

As a framework, the deep-learning-based image reconstruction method does not require any hardware changes to the user’s optical setup, and it can be integrated with existing OCT systems to speed up the image acquisition process. Although the researchers demonstrated their approach using a swept-source OCT system, they said that their framework for OCT image reconstruction can also be used in various spectral-domain OCT systems that acquire spectral interferometry data for 3D imaging of samples.

“These results highlight the transformative potential of this neural network-based OCT image reconstruction framework, which can be easily integrated with various spectral domain OCT systems, to improve their 3D imaging speed without sacrificing resolution or signal-to-noise of the reconstructed images,” Ozcan said.

The research was published in Light: Science & Applications (www.doi.org/10.1038/s41377-021-00594-7).

©2024 Photonics Media