
Deep Learning-Trained Neural Network Reconstructs OCT Images

LOS ANGELES, Aug. 12, 2021 — A team of UCLA and University of Houston (UH) scientists, led by Aydogan Ozcan in collaboration with Kirill Larin, used deep learning to train a neural network to rapidly reconstruct OCT images using undersampled spectral data. Although the deep-learning-based image reconstruction method was given significantly less spectral data than standard image reconstruction methods, it was able to reconstruct high-quality images without any spatial artifacts.

With standard image reconstruction methods, undersampled spectral data typically produces severe spatial artifacts in the reconstructed images.

To demonstrate the efficacy of the deep-learning-based framework for OCT imaging, the researchers trained and blindly tested a deep neural network using mouse embryo samples imaged by a swept-source OCT system. They also tested their approach on several types of human samples. A single image reconstruction network was trained across all tissue types, with one sample of each type reserved for blind testing. In the test phase, the network consistently achieved high-quality image reconstructions.

Using twofold undersampled spectral data (640 spectral points per A-line), the trained neural network reconstructed 512 A-lines in 0.59 ms while running on multiple GPUs. The neural network removed spatial artifacts due to undersampling and the omission of spectral data points.

The trained network produced a good match to images of the same samples reconstructed using the full spectral OCT data (1280 spectral points per A-line).
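Why undersampling the spectrum creates spatial artifacts under standard reconstruction can be illustrated with a minimal numpy sketch. This is a simplified single-reflector toy model (not the authors' pipeline): in conventional Fourier-domain OCT processing, an A-line is obtained from the FFT of the spectral interferogram, and halving the spectral sampling halves the unambiguous depth range, so deeper reflectors alias to incorrect depths.

```python
import numpy as np

# Toy model: one reflector produces a cosine fringe across the spectral
# axis; the fringe frequency encodes its depth. Sizes mirror the article
# (1280 full / 640 undersampled spectral points per A-line).
N_FULL = 1280          # spectral points per A-line (full sampling)
DEPTH_BIN = 400        # reflector depth, expressed in FFT depth bins

k = np.arange(N_FULL)
interferogram = np.cos(2 * np.pi * DEPTH_BIN * k / N_FULL)

# Standard reconstruction: FFT of the full spectral data.
a_line_full = np.abs(np.fft.fft(interferogram))
peak_full = int(np.argmax(a_line_full[1:N_FULL // 2])) + 1   # -> 400

# Twofold undersampling: keep every other spectral point (640 points).
under = interferogram[::2]
a_line_under = np.abs(np.fft.fft(under))
peak_under = int(np.argmax(a_line_under[1:len(under) // 2])) + 1  # -> 240 (aliased)
```

The full-spectrum reconstruction places the reflector at depth bin 400, while the 2× undersampled reconstruction aliases it to bin 240 — exactly the class of spatial artifact the trained network is reported to remove.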

Deep learning improves image reconstruction in optical coherence tomography using significantly less spectral data. Courtesy of the Ozcan Lab at UCLA.
The team further showed that its approach could be extended to process 3× undersampled spectral data per A-line, with some degradation in reconstructed image quality compared to 2× undersampling. The researchers also demonstrated an A-line-optimized undersampling method, created by jointly optimizing the spectral sampling locations and the corresponding image reconstruction network, which improved overall imaging performance while using even fewer spectral data points per A-line.
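The reconstruction side of this idea can be sketched in miniature: learn a mapping from undersampled spectra to full-data A-lines from example pairs. In the hedged toy below, a ridge-regression matrix stands in for the paper's deep network, the sampling locations are fixed at uniform 2× decimation rather than jointly optimized, and all sizes and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, SAMPLES = 64, 32, 500   # toy sizes: full/undersampled points, training pairs

# Synthetic training data: sparse reflector profiles and their spectra.
profiles = rng.random((SAMPLES, N)) * (rng.random((SAMPLES, N)) < 0.05)
spectra = np.fft.fft(profiles, axis=1).real

inputs = spectra[:, ::2]                       # 2x undersampled spectra
targets = np.abs(np.fft.fft(spectra, axis=1))  # "ground truth": full-data A-lines

# Ridge regression: a linear stand-in for the reconstruction network.
lam = 1e-3
W = np.linalg.solve(inputs.T @ inputs + lam * np.eye(M), inputs.T @ targets)

pred = inputs @ W
mse_learned = np.mean((pred - targets) ** 2)
mse_zero = np.mean(targets ** 2)               # trivial all-zero baseline
```

On this toy data the learned operator drives reconstruction error well below the trivial baseline; the actual system replaces the linear map with a deep network and, in the A-line-optimized variant, also learns which spectral points to sample.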

As a framework, the deep-learning-based image reconstruction method does not require any hardware changes to the user’s optical setup, and it can be integrated with existing OCT systems to speed up the image acquisition process. Although the researchers demonstrated their approach using a swept-source OCT system, they said that their framework for OCT image reconstruction can also be used in various spectral-domain OCT systems that acquire spectral interferometry data for 3D imaging of samples.

“These results highlight the transformative potential of this neural network-based OCT image reconstruction framework, which can be easily integrated with various spectral domain OCT systems, to improve their 3D imaging speed without sacrificing resolution or signal-to-noise of the reconstructed images,” Ozcan said.

The research was published in Light: Science & Applications (August 2021).

©2023 Photonics Media, 100 West St., Pittsfield, MA, 01201 USA, [email protected]
