

Image Recovery Method Improves Compressive Sensing, Phase Retrieval

JOEL WILLIAMS, ASSOCIATE EDITOR
joel.williams@photonics.com

Research from Lawrence Livermore National Laboratory (LLNL) has yielded a method of compressive image recovery that is trained on patches of images rather than on full-size images. The method, called generative patch prior (GPP), can recover a wide variety of natural images. It compares favorably with existing methods, the researchers said, in its ability to perform both compressive image sensing and compressive phase retrieval tasks.

Compressive sensing is a class of methods that aids in the design of new image sensors that, rather than sensing individual pixels, sense coded or modulated versions of several pixels in one go, said Rushil Anirudh, a computer scientist in the machine intelligence group at LLNL. With the proper algorithm, the modulation can be undone to recover the original information.
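As a rough, illustrative sketch of that measurement model (not the LLNL team's code; the image size and random sensing matrix below are assumptions), each recorded value mixes many pixels, so far fewer numbers are stored than there are pixels, and recovery amounts to inverting an underdetermined system with the help of an image prior:

import numpy as np

# Illustrative sizes: a 32 x 32 image (1,024 pixels) sensed with only 256 measurements.
n_pixels = 32 * 32
n_measurements = 256

rng = np.random.default_rng(0)
x_true = rng.random(n_pixels)  # stand-in for the unknown image, flattened to a vector

# Coded sensing: each row of A modulates and mixes many pixels into one measurement.
A = rng.standard_normal((n_measurements, n_pixels)) / np.sqrt(n_measurements)
y = A @ x_true

# A naive attempt to "undo" the modulation. With fewer measurements than pixels the
# system is underdetermined, which is exactly why a strong image prior is needed to
# single out the correct solution among the many that fit the data.
x_naive, *_ = np.linalg.lstsq(A, y, rcond=None)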

“This implies that sensor systems can be designed so that higher-resolution images can be recovered from lower-resolution sensors,” Anirudh told Photonics Media. “Since we are seeking to get ‘more for less,’ the algorithms need to be designed carefully with image ‘priors’ that have the same understanding of what the image data could be like. We find that this is much more powerful than techniques proposed in the past.”

With GPP, the priors on which the algorithm is trained are composed of patches of images rather than whole images. Priors, which capture properties of images such as spatial coherence, help the algorithm fill in gaps based on its understanding of what typical images look like.

Patch-based priors, Anirudh said, were once popular because they were computationally easier to deal with than entire images. Today, he said, in the deep-learning era, we are witnessing a resurgence of patch-based methods.
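As a hedged illustration of what training on patches means in practice (the patch size, stride, and helper function here are hypothetical, not taken from the paper), the training data for a patch prior is simply the collection of small tiles cut from larger images:

import numpy as np

def extract_patches(image, patch_size=8, stride=8):
    # Cut a 2D image into small square tiles; a patch prior is trained on
    # collections of these tiles rather than on full-size images.
    h, w = image.shape
    patches = []
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            patches.append(image[i:i + patch_size, j:j + patch_size])
    return np.stack(patches)

# A 128 x 128 image yields 256 non-overlapping 8 x 8 patches -- a far smaller,
# lower-dimensional space for a generative model to learn than full images.
image = np.random.default_rng(1).random((128, 128))
patches = extract_patches(image)
print(patches.shape)  # (256, 8, 8)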

The new technology is designed to solve “inverse problems,” in which an original scene or image must be estimated from a limited amount of data.

“As a simple example of a classic inverse problem, say we are shown only a random 10% of the pixels in an image, and asked to fill out the remaining 90%. If we guess the remaining pixels at random, we are not going to get a solution that looks anything like the original image,” Anirudh said. “The central idea behind GPP is to exploit the fact that images can be broken down into small patches and it is much easier for a machine learning model to approximate all the variations in patches than in images due to the reduced size.”
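A minimal sketch of that masking setup (purely illustrative; GPP's actual recovery step, which optimizes over a learned patch generator, is not reproduced here) shows why random guessing fails and where a prior enters:

import numpy as np

rng = np.random.default_rng(2)
image = rng.random((64, 64))  # stand-in for the original image

# Observe only a random 10% of the pixels; the inverse problem is to fill in the rest.
mask = rng.random(image.shape) < 0.10
observed = np.where(mask, image, np.nan)

# Filling the missing 90% at random ignores spatial structure and will not resemble
# the original. A prior-based method instead searches for an image that matches the
# observed pixels while looking like something the learned patch model can generate.
random_guess = np.where(mask, image, rng.random(image.shape))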

A model that can capture the diversity and complexity of all images does not yet exist. The team observed, however, that training a model to capture most of the variations in patches is comparatively simple.

“GPP obtains higher-quality solutions in compressive sensing even under extreme settings where very few measurements are available,” Anirudh said.

The research team, which included personnel from Mitsubishi Electric Research Laboratories and Arizona State University, showed that the method outperforms several common approaches on compressive sensing and compressive phase retrieval tasks. GPP, the researchers concluded, applies to a broader variety of images than existing generative priors. They also proposed a self-calibration scheme that allows the model to automatically correct for real-world sensor distortions and corruptions, and showed that it performed well against a number of real-world baselines.

The research was presented at the 2021 IEEE Winter Conference on Applications of Computer Vision and received the conference’s Best Paper Honorable Mention award.


