Generative AI Achieves Superresolution with Minimal Tuning

GÖRLITZ, Germany, May 2, 2024 — Diffusion models for artificial intelligence (AI) produce high-quality samples and offer stable training, but their sensitivity to the choice of variance schedule can be a drawback. The variance schedule controls the dynamics of the diffusion process, and typically it must be fine-tuned with a hyperparameter search for each application. This is a time-consuming task that can lead to suboptimal performance.
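
The kind of fixed schedule that normally has to be hand-tuned can be sketched in a few lines of code. The snippet below is only an illustration of a conventional, predefined variance schedule; the linear range of 1e-4 to 0.02 over 1,000 steps is a common default from the diffusion-model literature, not a value taken from this work.

```python
# A minimal sketch of a conventional, predefined variance schedule: the kind of
# hyperparameter that typically has to be tuned per application. Values are
# common literature defaults, used here purely for illustration.
import numpy as np

def linear_variance_schedule(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Per-step noise variances beta_t and cumulative signal factors alpha_bar_t."""
    betas = np.linspace(beta_start, beta_end, num_steps)  # noise added at each step
    alpha_bar = np.cumprod(1.0 - betas)                    # fraction of signal surviving to step t
    return betas, alpha_bar

betas, alpha_bar = linear_variance_schedule()
print(betas[:3], alpha_bar[-1])  # small early-step variances; almost no signal left by the end
```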

A new open-source algorithm, from the Center for Advanced Systems Understanding (CASUS) at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Imperial College London, and University College London, improves the quality and resolution of images, including microscopic images, with minimal fine-tuning.

The algorithm, called the Conditional Variational Diffusion Model (CVDM), learns the variance schedule as part of the training process. In experiments, the CVDM’s approach to learning the schedule was shown to yield comparable or better results than models that set the schedule as a hyperparameter.
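
One way to make a schedule learnable, shown here as a rough hypothetical sketch rather than the CVDM's actual parameterization, is to replace the fixed table of variances with a small network whose output is optimized jointly with the rest of the diffusion model.

```python
# Hypothetical sketch of a learnable variance schedule: a small network maps the
# diffusion time t to a positive variance and is trained jointly with the model.
# Illustrative only; this is not the CVDM's exact formulation.
import torch
import torch.nn as nn

class LearnableSchedule(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.SiLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # Softplus keeps the variance positive
        )

    def forward(self, t):
        # t: tensor of shape (batch, 1) with values in [0, 1]
        return self.net(t)

schedule = LearnableSchedule()
t = torch.rand(8, 1)
beta_t = schedule(t)  # per-step variances, updated by the same optimizer as the denoiser
```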

The CVDM can be used to achieve superresolution through an inverse problem approach.

The availability of big data analytics, along with new ways to analyze mathematical and scientific data, allows researchers to use an inverse problem approach to uncover the causes behind specific observations, such as those made in microscopic imaging.
A fluorescence micrograph taken from the open BioSR superresolution microscopy dataset (above and left of the dashed line) is compared with the same picture reconstructed by the CVDM (below and right of the dashed line). The image depicts fluorescently labeled F-actin cytoskeletal proteins of a cell. Courtesy of A. Yakimovich/CASUS; the modified image is from the BioSR dataset by Chang Qiao and Di Li.

“You have an observation — your microscopic image,” researcher Gabriel della Maggiora said. “Applying some calculations, you then can learn more about your sample than first meets the eye.”

By calculating the parameters that produced the observation, i.e., the image, a researcher can achieve higher-resolution images. However, the path from observation to superresolution is usually not obvious, and the observational data is often noisy, incomplete, or uncertain.
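
A toy example makes the inverse-problem idea concrete. The sketch below is purely illustrative and unrelated to the CVDM's internals: it treats a known blur as the forward model and recovers the underlying signal from a noisy observation by regularized least squares. The blur kernel, noise level, and regularization weight are invented values.

```python
# Toy one-dimensional inverse problem (illustrative only, not the CVDM):
# given a noisy, blurred observation y = A x + noise, estimate the signal x.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x_true = np.zeros(n)
x_true[40:60] = 1.0                                   # the unknown "sample" to recover
kernel = np.array([0.25, 0.5, 0.25])                  # assumed blur (the forward model)
A = np.array([np.convolve(row, kernel, mode="same") for row in np.eye(n)])
y = A @ x_true + 0.01 * rng.standard_normal(n)        # the noisy observation ("the image")

lam = 1e-2                                            # regularization weight, hand-picked here
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print(float(np.abs(x_hat - x_true).mean()))           # average reconstruction error
```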

Diffusion models help researchers solve complex inverse problems. To generate images, the diffusion model learns which pixel arrangements are common and which are uncommon in the training dataset images. It generates the new, desired image bit by bit, until it arrives at the pixel arrangement that correlates most closely with the underlying structure of the training data.
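
That step-by-step generation can be written as a reverse-diffusion loop. The version below is a generic, hedged illustration of standard denoising-diffusion sampling, with a placeholder standing in for a trained noise-prediction network; it is not the CVDM's sampling procedure.

```python
# Generic reverse-diffusion sampling loop (illustrative only, not the CVDM).
# Starting from pure noise, each step removes a little noise according to the schedule.
import torch
import torch.nn as nn

num_steps = 1000
betas = torch.linspace(1e-4, 0.02, num_steps)   # a predefined schedule, as sketched earlier
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

denoiser = nn.Identity()        # placeholder for a trained noise-prediction network

x = torch.randn(1, 1, 32, 32)   # start from pure noise
for t in reversed(range(num_steps)):
    eps_hat = denoiser(x)       # predicted noise at step t
    mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps_hat) / torch.sqrt(alphas[t])
    noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
    x = mean + torch.sqrt(betas[t]) * noise
# x now holds one generated sample (meaningless here, because the denoiser is a placeholder)
```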

A diffusion model is sensitive to the choice of the predefined schedule that controls the diffusion process, including how noise is added. When too little or too much noise is added, or noise is added at the wrong place or time, training can fail. Such unproductive runs hinder the effectiveness of diffusion models.

“Diffusion models have long been known as computationally expensive to train . . . But new developments like our Conditional Variational Diffusion Model allow minimizing ‘unproductive runs,’ which do not lead to the final model,” researcher Artur Yakimovich said. “By lowering the computational effort and hence power consumption, this approach may also make diffusion models more eco-friendly to train.”

The researchers tested the CVDM in three applications — superresolution microscopy, quantitative phase imaging, and image superresolution. For superresolution microscopy, the CVDM demonstrated comparable reconstruction quality and enhanced image resolution compared to previous methods. For quantitative phase imaging, it significantly outperformed previous methods. For image superresolution, reconstruction quality was comparable to previous methods. The CVDM also produced good results for a wild clinical microscopy sample, indicating that it could be useful in medical microscopy.

Based on the experimental outcomes, the researchers concluded that fine-tuning the schedule by experimentation should be avoided, because the schedule can be learned during training in a stable way that yields the same or better results.

“Of course, there are several methods out there to increase the meaningfulness of microscopic images — some of them relying on generative AI models,” Yakimovich said. “But we believe that our approach has some new, unique properties that will leave an impact in the imaging community, namely high flexibility and speed at a comparable, or even better quality, compared to other diffusion model approaches.”

The CVDM supports probabilistic conditioning on data, is computationally less expensive than established diffusion models, and can be easily adapted for a variety of applications.

“In addition, our CVDM provides direct hints where it is not very sure about the reconstruction — a very helpful property that sets the path forward to address these uncertainties in new experiments and simulations,” Yakimovich said.
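
One common way a probabilistic model can surface such hints, shown here as a hypothetical sketch rather than the CVDM's specific mechanism, is to draw several reconstructions of the same observation and inspect the per-pixel spread.

```python
# Hypothetical sketch of an uncertainty map from a probabilistic reconstruction model
# (not necessarily how the CVDM computes it): sample several reconstructions of the
# same observation and use the per-pixel standard deviation as a confidence signal.
import numpy as np

def uncertainty_map(sample_fn, observation, n_samples=16):
    """sample_fn(observation) returns one reconstruction; returns mean image and per-pixel std."""
    samples = np.stack([sample_fn(observation) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)   # high std marks low-confidence regions

# Usage sketch with a dummy sampler standing in for a trained model.
rng = np.random.default_rng(1)
dummy_sampler = lambda obs: obs + 0.05 * rng.standard_normal(obs.shape)
mean_img, std_img = uncertainty_map(dummy_sampler, rng.random((64, 64)))
```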

The work will be presented by della Maggiora at the International Conference on Learning Representations (ICLR 2024) on May 8 in poster session 3. ICLR 2024 takes place May 7-11, 2024, at the Messe Wien Exhibition and Congress Center, Vienna.

The research was published in the Proceedings of the Twelfth International Conference on Learning Representations, 2024 (www.arxiv.org/abs/2312.02246).

Published: May 2024