
Optical Processor Captures Scenes in Spatially Incoherent Light

LOS ANGELES, Aug. 22, 2023 — A research team led by professor Aydogan Ozcan at the University of California, Los Angeles (UCLA) developed a deep-learning-based approach to designing spatially incoherent diffractive optical processors. The method provides a way to build all-optical visual processors that work under natural light. Once trained through deep learning, the diffractive optical processors can transform any input light intensity pattern into the desired output pattern.

The researchers believe that their design for diffractive optical processors will have broad application, in addition to contributing to the quest for a fast, energy-efficient alternative to electronic computing for future computing needs.

Since natural lighting conditions typically involve spatially incoherent light, the ability to process visual information under incoherent light is crucial for applications that require ultrafast processing of natural scenes, like autonomous vehicles. The capability to process information under incoherent light is also useful for high-resolution microscopy applications that include spatially incoherent processes such as fluorescence light emission from samples.

The diffractive optical processors are made from structurally engineered surfaces that can be fabricated using lithography or 3D-printing techniques. The structured surfaces use the successive diffraction of light to perform linear transformations of the input light field without using external digital computing power.
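A minimal numerical sketch, not the authors' code, can illustrate the idea: each structured surface applies a phase pattern to the optical field, free-space propagation between surfaces mixes the field, and the whole cascade acts as a single linear operator on the complex input field. Here the propagation operator is a stand-in (a unitary DFT matrix) rather than a physical diffraction model, and the phase values are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of spatial samples across the field of view

# Free-space propagation between layers, modeled here as a fixed unitary
# mixing matrix (a normalized DFT stands in for angular-spectrum propagation).
P = np.fft.fft(np.eye(N)) / np.sqrt(N)

# Two phase-only diffractive surfaces (the trainable elements; random here).
phase1 = rng.uniform(0, 2 * np.pi, N)
phase2 = rng.uniform(0, 2 * np.pi, N)

def diffractive_processor(field):
    """Cascade: propagate -> phase surface -> propagate -> phase surface."""
    field = P @ field
    field = np.exp(1j * phase1) * field
    field = P @ field
    field = np.exp(1j * phase2) * field
    return field

# The entire cascade is linear in the input field: one complex matrix H.
H = np.diag(np.exp(1j * phase2)) @ P @ np.diag(np.exp(1j * phase1)) @ P

x = rng.normal(size=N) + 1j * rng.normal(size=N)
assert np.allclose(diffractive_processor(x), H @ x)
```

In the actual processors, the propagation operator follows from scalar diffraction theory and the surface phases are optimized by deep learning; the point of the sketch is only that successive diffraction composes into one linear transformation without any digital computation at inference time.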
Universal linear intensity transformations using spatially incoherent diffractive processors. Courtesy of the Ozcan Lab at UCLA.

The researchers used numerical simulations and deep learning, training with examples of input-output intensity profiles, to demonstrate that, under spatially incoherent light, the diffractive optical processors can be trained to perform any arbitrary linear transformation of time-averaged intensities between the processor’s input and output fields of view.
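Why the transformation acts on intensities can be checked with a short simulation, a sketch under simplified assumptions rather than the paper's model: when every input pixel carries an independent random phase, the interference cross terms average to zero over time, so the time-averaged output intensity depends linearly on the input intensity through the elementwise squared magnitude of the coherent transfer matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32
# Coherent field transform of the processor (any complex matrix works here).
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

I_in = rng.uniform(size=N)   # input intensity pattern
amps = np.sqrt(I_in)

# Spatial incoherence: every input pixel carries an independent random
# phase. Average the output intensity over many phase realizations.
trials = 20000
phases = rng.uniform(0, 2 * np.pi, size=(trials, N))
fields = amps * np.exp(1j * phases)                # (trials, N) input fields
I_out_avg = (np.abs(fields @ H.T) ** 2).mean(axis=0)

# Cross terms average out, so the time-averaged output intensity is a
# linear function of the input intensity with the nonnegative matrix |H|^2.
I_out_theory = (np.abs(H) ** 2) @ I_in
assert np.allclose(I_out_avg, I_out_theory, rtol=0.05)
```

This is the structure the training exploits: under spatially incoherent illumination, the quantity the processor can shape arbitrarily is the intensity-to-intensity matrix, not the complex field transform itself.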

The researchers designed spatially incoherent diffractive processors for the linear processing of intensity information at multiple illumination wavelengths operating simultaneously. They demonstrated that using spatially incoherent broadband light, it is possible to simultaneously perform multiple linear intensity transformations, with a different transformation assigned to each spatially incoherent illumination wavelength.
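A hedged sketch of how one surface can carry several wavelength-assigned transformations, using a simple thin-element model that is an assumption here, not the paper's formulation: the phase delay imparted by a fixed surface height profile scales with 1/wavelength, so the same physical surface presents a different coherent matrix, and hence a different intensity transformation, at each illumination wavelength. The refractive index, heights, and propagation operator below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
wavelengths = [450e-9, 550e-9, 650e-9]  # illustrative values, in meters
n_index = 1.5                            # assumed material refractive index
heights = rng.uniform(0, 1e-6, N)        # one shared surface height profile

P = np.fft.fft(np.eye(N)) / np.sqrt(N)   # stand-in propagation operator

def intensity_transform(lam):
    """Intensity transform the shared surface realizes at wavelength lam."""
    # Thin-element model: phase delay scales with height and 1/wavelength.
    phase = 2 * np.pi * (n_index - 1) * heights / lam
    H = P @ np.diag(np.exp(1j * phase)) @ P  # propagate, modulate, propagate
    return np.abs(H) ** 2                    # nonnegative intensity matrix

# Because the phase depends on 1/wavelength, the very same surface assigns
# a different linear intensity transformation to each wavelength.
A = {lam: intensity_transform(lam) for lam in wavelengths}
```

Training then optimizes a single height profile so that each wavelength's induced intensity matrix matches its own target transformation simultaneously.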

Additionally, the researchers numerically demonstrated a diffractive network design that performed all-optical classification of handwritten digits under spatially incoherent illumination, achieving a test accuracy of greater than 95%.
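The readout mechanics of such an all-optical classifier can be sketched as follows; the transform matrix here is random and untrained, purely to show the decision rule, whereas the reported network's transform is learned: the processor routes the scene's intensity onto ten detector regions, and the predicted digit is simply the region that collects the most light.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_classes = 28 * 28, 10

# Hypothetical nonnegative intensity transform; the paper's version is
# optimized by deep learning, while this one is random and untrained.
A = rng.uniform(size=(n_classes, n_pixels))

def classify(image):
    """All-optical readout: route intensity to 10 detector regions and
    report the region that collects the most light."""
    detector_signals = A @ image.ravel()  # linear intensity transform
    return int(np.argmax(detector_signals))

digit = rng.uniform(size=(28, 28))        # stand-in for a handwritten digit
predicted = classify(digit)               # class index in 0..9
```

No electronic inference step is involved: once fabricated, the comparison of detector signals is the only post-processing required.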

The team’s numerical analyses showed that phase-only diffractive optical processors with shallow architectures — for example, processors that have only one trainable diffractive surface — are unable to accurately approximate an arbitrary intensity transformation, irrespective of the total number of diffractive features available for optimization. In contrast, the researchers found that phase-only diffractive optical processors with deeper architectures — that is, processors with multiple trainable diffractive surfaces cascaded one after another — can perform an arbitrary linear intensity transformation under spatially incoherent illumination with negligible error.

These findings can be used to build all-optical information processing and visual computing systems that use spatially and temporally incoherent light, for example, to visualize natural scenes. Diffractive optical processors also have the potential to support applications in computational microscopy and incoherent imaging that feature spatially varying engineered point spread functions.

The research was published in Light: Science & Applications (August 2023).

©2023 Photonics Media, 100 West St., Pittsfield, MA, 01201 USA, [email protected]