
NIH Researchers Speed Image Processing for Fluorescence Microscopy

Advances in image processing techniques could reduce post-processing time for highly complex microscopic images by up to several thousandfold, based on work done by a research team at the National Institutes of Health (NIH) and its collaborators at the University of Chicago and Zhejiang University.

The team, led by Hari Shroff, chief of the Laboratory on High Resolution Optical Imaging at the National Institute of Biomedical Imaging and Bioengineering (NIBIB), took a three-step approach to improving processing time. First, it modified the classic deconvolution algorithm used to reduce the blurring of tiny objects captured in the microscope’s images. Deconvolution traditionally requires significant computing power and time. By adapting an approach originally proposed for computed tomography, the team was able to accelerate deconvolution by more than tenfold for fluorescence microscopy.
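
The article does not name the algorithm beyond “classic deconvolution,” but the standard iterative method in fluorescence microscopy is Richardson-Lucy. The sketch below shows that baseline in Python, assuming a known point spread function (PSF); the team’s actual speedup comes from modifying the back-projection step with an approach borrowed from computed tomography, which is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=20, eps=1e-12):
    """Classic Richardson-Lucy deconvolution for a 2D image.

    Illustrative baseline only: the NIH team's acceleration modifies the
    back-projection step using an idea from computed tomography, which is
    not reproduced here.
    """
    estimate = np.full(blurred.shape, blurred.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]                 # flipped PSF (adjoint of the blur)
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)      # measured image / current model
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Each iteration re-blurs the current estimate, compares it with the measured image, and uses the mismatch to refine the estimate, which is why iteration count and image size drive the computational cost.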

According to Shroff, the improved algorithm can be applied to almost any fluorescence microscope. “It’s a strict win, we think. We’ve released the code and other groups are already using it,” he said.

3D image of a mouse intestine with different antibodies in green, red, yellow, and purple. Courtesy of the National Institute of Biomedical Imaging and Bioengineering.

To reduce the time needed to position and stitch together multiple views of a sample captured from different angles, the researchers applied a parallelization approach. Instead of performing each operation separately, they used parallelization to analyze, position, and combine multiple views concurrently. They demonstrated a tenfold to more than hundredfold improvement in processing speed.
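
As a rough illustration of the idea, the pattern below registers all views concurrently before fusing them, rather than handling each view to completion in turn. The callables `register_view` and `fuse_views` are hypothetical stand-ins; the team’s implementation runs on the GPU and is considerably more involved.

```python
from concurrent.futures import ThreadPoolExecutor

def register_and_fuse(views, register_view, fuse_views):
    """Register all views concurrently, then fuse them into one volume.

    `register_view` and `fuse_views` are hypothetical callables standing in
    for the team's GPU routines; only the parallel pattern is shown.
    """
    with ThreadPoolExecutor() as pool:
        registered = list(pool.map(register_view, views))   # one task per view
    return fuse_views(registered)
```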

“Our improvements in registration and deconvolution mean that for data sets that fit onto a graphics card, image analysis can in principle keep up with the speed of acquisition,” Shroff said. “For bigger data sets, we found a way to efficiently carve them up into chunks, pass each chunk to the GPU, do the registration and deconvolution, and then stitch those pieces back together. That’s very important if you want to image large pieces of tissue, for example, from a marine animal, or if you are clearing an organ to make it transparent to put on the microscope. Some forms of large microscopy are really enabled and sped up by these two advances.”
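
A minimal sketch of the chunking strategy Shroff describes, assuming a hypothetical `gpu_process` routine that registers and deconvolves one chunk: the volume is split along one axis into overlapping pieces small enough for GPU memory, each piece is processed, and only the interior of each result is kept so that seams fall inside the discarded overlap.

```python
import numpy as np

def process_in_chunks(volume, chunk, overlap, gpu_process):
    """Split a large volume along z into overlapping chunks, process each
    chunk (e.g., register and deconvolve on the GPU via the hypothetical
    `gpu_process`), and stitch the results, keeping only each chunk's
    interior so the seams fall inside the discarded overlap."""
    out = np.empty(volume.shape, dtype=float)
    z = 0
    while z < volume.shape[0]:
        lo = max(z - overlap, 0)
        hi = min(z + chunk + overlap, volume.shape[0])
        result = gpu_process(volume[lo:hi])      # this piece fits in GPU memory
        keep_lo = z - lo
        keep_hi = keep_lo + min(chunk, volume.shape[0] - z)
        out[z:z + (keep_hi - keep_lo)] = result[keep_lo:keep_hi]
        z += chunk
    return out
```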

4- × 2- × 0.5-mm³ volume of brain from a fixed and cleared mouse. Progressively higher-resolution subvolumes highlight the detailed structures with isotropic submicron resolution. Courtesy of Yijun Su, Yicong Wu, Harshad Vishwasrao, and Ted Usdin, NIH/National Institute of Biomedical Imaging and Bioengineering.

The researchers further reduced the time needed to process data by training a neural network to produce cleaner and higher resolution images. They used deep learning to accelerate complex deconvolution of data sets in which the blur varies significantly in different parts of the image. They trained the computer to recognize the relationship between badly blurred data (the input) and a cleaned, deconvolved image (the output). Then they gave it blurred data it hadn’t seen before. The trained neural network was able to produce deconvolved results quickly. “That’s where we got thousandsfold improvements in deconvolution speed,” Shroff said.
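
In outline, this is standard supervised learning: pairs of blurred inputs and conventionally deconvolved targets, with the network penalized for the difference between its prediction and the target. The toy PyTorch sketch below illustrates only that setup; the architecture, loss, and training schedule in the actual work differ.

```python
import torch
import torch.nn as nn

# Toy stand-in for the trained network: a small CNN that learns to map a
# blurred image (input) to its deconvolved counterpart (target). The actual
# architecture, loss, and training data in the published work differ.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(blurred_batch, deconvolved_batch):
    """One supervised step on (N, 1, H, W) float tensors: predict a clean
    image from the blurred input and penalize the difference from the
    conventionally deconvolved target."""
    optimizer.zero_grad()
    prediction = model(blurred_batch)
    loss = loss_fn(prediction, deconvolved_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once trained, producing a deconvolved image is a single forward pass through the network rather than many iterations, which is where the large speedups over conventional deconvolution come from.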


Although the deep learning algorithms worked well, “it’s with the caveat that they are brittle,” Shroff said. “Meaning, once you’ve trained the neural network to recognize a type of image, say a cell with mitochondria, it will deconvolve those images very well. But if you give it an image that is a bit different, say the cell’s plasma membrane, it produces artifacts. It’s easy to fool the neural network.”

“Deep learning augments what is possible,” Shroff said. “It’s a good tool for analyzing data sets that would be difficult any other way.”

Lateral and axial images of a 32-hour zebrafish embryo, marking cell boundaries within and outside the lateral line primordium. Courtesy of Harshad Vishwasrao and Damian Dalle Nogare, NIH/National Institute of Biomedical Imaging and Bioengineering.

The advances made by Shroff’s team expand the use of existing technology, allowing, for example, the imaging of thick samples that produce huge amounts of data when examined with fluorescence microscopes. While advances in microscopy have provided increasingly complex, high-resolution images, computing power has so far limited the techniques that researchers can use to process this data.

“Acquiring modern imaging data is a bit like drinking from a firehose,” Shroff said. “These methods help us obtain valuable biological information faster, which is essential, given the massive amount of data that can be produced by these microscopes.”

Lateral and axial images of a C. elegans embryo expressing neuronal (green) and pan-nuclear (magenta) markers. Isotropic resolution enables lineage tracing and inspection of neurite outgrowth pre-twitching. Courtesy of L. Duncan and M. Moyle, Yale School of Medicine.

These advances also could support the use of computational microscopy, in which the post-processing of raw data is necessary to produce the final high-resolution image. Shroff and his collaborators hope that their work will encourage researchers to try new approaches that may otherwise have been deemed too labor-intensive.

The research was published in Nature Biotechnology (https://doi.org/10.1038/s41587-020-0560-x).

Published: July 2020