Ozcan Group Improves Inference Accuracy for All-Optical Diffractive Neural Networks

In new research, scientists from the lab of Professor Aydogan Ozcan at UCLA have demonstrated distinct improvements to the inference and generalization performance of diffractive optical neural networks.

The researchers demonstrated a differential detection scheme in which each class is assigned to a separate pair of photodetectors positioned behind a diffractive optical network. Class inference is made by selecting the class whose detector pair yields the maximum normalized signal difference.
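
Based on that description, a minimal numerical sketch of the differential readout rule follows. The function name, the detector arrays, and the choice of normalizing by the total detected power are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the differential detection readout (assumed interface, not the
# authors' code): each class has a positive and a negative photodetector, and
# the predicted class maximizes the normalized signal difference of its pair.
import numpy as np

def differential_inference(pos_signals: np.ndarray, neg_signals: np.ndarray) -> int:
    """pos_signals, neg_signals: optical power at the 10 positive and 10
    negative detectors for one input object (hypothetical readout values)."""
    total = pos_signals.sum() + neg_signals.sum()     # normalization term (assumed: total detected power)
    scores = (pos_signals - neg_signals) / total      # normalized difference per class pair
    return int(np.argmax(scores))                     # class with the largest difference wins

# Example with random detector readings for a 10-class task such as MNIST
rng = np.random.default_rng(0)
print(differential_inference(rng.random(10), rng.random(10)))
```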

Using this scheme, which involved 10 photodetector pairs behind five diffractive layers with a total of 0.2 million neurons, the researchers achieved blind testing accuracies of 98.54%, 90.54%, and 48.51% for the MNIST, Fashion-MNIST, and grayscale CIFAR-10 data sets, respectively.

Advances in all-optical diffractive neural networks, Ozcan Group at UCLA. Courtesy of SPIE.

Operation principles of a differential diffractive optical neural network. Since diffractive optical neural networks operate using coherent illumination, phase and/or amplitude channels of the input plane can be used to represent information. Courtesy of SPIE.

The researchers reduced the cross-talk and optical signal coupling between the positive and negative detectors of each class by dividing the optical path into two jointly trained diffractive neural networks that work in parallel. Using this parallelization approach, they also divided the individual classes of a target data set among multiple jointly trained diffractive neural networks, with each network responsible for inferring a subset of the classes, as sketched below.
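
The following sketch illustrates that class-division idea under simplifying assumptions: the number of parallel networks, the class-to-network assignment, and the per-network normalization are all hypothetical choices for illustration, not taken from the published design.

```python
# Class division among jointly trained parallel diffractive networks (illustrative).
import numpy as np

NUM_CLASSES = 10
NUM_NETWORKS = 2                                        # hypothetical: two parallel networks
# Hypothetical split: classes 0-4 read out behind network 0, classes 5-9 behind network 1
class_to_network = np.repeat(np.arange(NUM_NETWORKS), NUM_CLASSES // NUM_NETWORKS)

def parallel_differential_inference(pos: np.ndarray, neg: np.ndarray) -> int:
    """pos, neg: positive/negative detector signals for all NUM_CLASSES classes,
    where class c is measured behind network class_to_network[c]."""
    scores = np.empty(NUM_CLASSES)
    for net in range(NUM_NETWORKS):
        members = np.where(class_to_network == net)[0]          # classes handled by this network
        norm = pos[members].sum() + neg[members].sum()          # normalize within the network (assumed)
        scores[members] = (pos[members] - neg[members]) / norm  # differential scores for those classes
    return int(np.argmax(scores))                               # merge by taking the global maximum
```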

Using class-specific differential detection in jointly optimized diffractive neural networks operating in parallel, the team’s simulations achieved blind testing accuracies of 98.52%, 91.48%, and 50.82% for the MNIST, Fashion-MNIST, and grayscale CIFAR-10 data sets, respectively, coming close to the performance of some of the earlier generations of all-electronic deep neural networks.

Additionally, the researchers independently optimized multiple diffractive networks and combined their outputs in a manner similar to ensemble methods used in machine learning.
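
A rough sketch of such an ensemble-style combination is shown below; averaging the per-class differential scores and taking the maximum is an assumed combination rule chosen for illustration, not necessarily the rule used in the paper.

```python
# Ensemble of independently optimized diffractive networks (illustrative).
import numpy as np

def ensemble_inference(per_network_scores) -> int:
    """per_network_scores: list of length-10 score vectors, one vector of
    normalized differential signals per independently optimized network."""
    mean_scores = np.mean(per_network_scores, axis=0)   # average the class scores (assumed rule)
    return int(np.argmax(mean_scores))                  # predicted class from the ensemble
```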

The advancement of diffractive optical neural network technology could make it possible for neural networks to recognize target objects more quickly and with significantly less power than standard computer-based machine learning systems. Ultimately, this could provide advantages for autonomous vehicles, robotics, and defense applications. The researchers believe that these latest systematic advances in diffractive optical network design have the potential to further the development of next-generation, task-specific, intelligent computational camera systems.

The research was published in Advanced Photonics, a publication of SPIE, the international society for optics and photonics (https://doi.org/10.1117/1.AP.1.4.046001).  
