
Following flow for less

Hank Hogan

Researchers at the University of Florida in Gainesville faced a problem familiar to investigators: They wanted to quantitatively visualize microflows. Such visualization is useful for investigating mixing inside a microfluidic chamber or channel.

[Image: TSDeconvolve_Fig1.jpg]

Researchers used deconvolution microscopy and optical sectioning to visualize microfluidic flow. They examined a microchannel device with a microscope stage, capturing the image at a given depth within the microchannel (a). They then adjusted the objective to acquire a stack of images in discrete steps across the entire microchannel depth (b). From these images the flow can be reconstructed. Images reprinted with permission of Analytical Chemistry.

Typically, fluid microflows are quantified by particle image velocimetry, whereby particles are injected into a flowing fluid and tracked via a series of images captured by a detector. Measuring velocity in three dimensions requires a stereo setup, and the equipment can be expensive.

Confocal microscopy can be used as well. It employs a single imager to optically section the fluid, thereby stepping through it and characterizing its flow. However, the setup is expensive and complicated — barriers to widespread use.

The researchers, therefore, came up with another solution — in part because of their lack of access to the traditional equipment. Z. Hugh Fan, an associate professor of biomedical and mechanical and aerospace engineering at the university, said that interactions with other scientists in different disciplines helped them solve their problem.

They developed a method that combines inexpensive conventional optical microscopy with a computational deconvolution algorithm. Using the two, they produced images of three-dimensional flows in plastic microfluidic channels.

In deconvolution microscopy, a mathematical algorithm extracts underlying objects from an image composed of overlapping individual sources. The technique often is used to sharpen an image by removing blur caused by out-of-focus light.

The researchers started by focusing a microscope at the desired depth within a microchannel. They then moved the objective in an axial direction, changing the focal plane in discrete steps until the entire channel depth was covered, to adequately sample the three-dimensional space. The size of the sampling step is related to the wavelength of the light, the refractive index of the material and the inverse of the square of the numerical aperture of the optics.
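The article gives only the proportionality, not the researchers' exact expression. A commonly used rule of thumb for the axial sampling interval follows the same dependence; the factor of 2 (Nyquist-style oversampling) and the example values below are assumptions for illustration:

```python
def axial_step_um(wavelength_um, refractive_index, numerical_aperture):
    """Rule-of-thumb axial (z) sampling interval for optical sectioning:
    proportional to the wavelength and the refractive index, and inversely
    proportional to the square of the numerical aperture. The factor of 2
    (Nyquist-style oversampling) is an assumption."""
    return wavelength_um * refractive_index / (2.0 * numerical_aperture ** 2)

# e.g. 0.52-um emission in water (n = 1.33) through a 0.75-NA objective
step = axial_step_um(0.52, 1.33, 0.75)
```

Note that a higher-NA objective demands finer axial steps, so higher-resolution optics mean more images per stack.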

At each step, a CCD or other sensor captured the image, forming a series of stacked images, each from a different focal plane, that supplied the input for the deconvolution operation.

Fan noted that a key advantage to deconvolution microscopy is cost. The optical equipment itself is relatively inexpensive and widely accessible, as is the software.

A shortcoming is that tens of images must be collected and then subjected to mathematical algorithms before the results are known. However, Fan said that neither of these is much of an issue. “The first step can be automated, and the second step is done by commercial software. It does not need more computer power than a typical PC.”

It is also important to account for nonphotonic noise, such as might arise from the electronics of the camera, and to correct for nonuniformity of light sources, which can be done by taking a flat-field image (an image of a uniform thickness of a fluorophore solution). The flat-field image also accounts for sensitivity variations between sensor pixels, and data from the image is used in the algorithms as a correction.
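The flat-field correction described above can be sketched in a few lines of NumPy. This is a generic illustration of the idea, not the researchers' code, and the function name and synthetic values are assumptions:

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Correct for illumination nonuniformity and per-pixel sensitivity.
    flat: image of a uniform-thickness fluorophore solution;
    dark: frame capturing nonphotonic (electronic) camera noise."""
    gain = flat.astype(float) - dark          # per-pixel response
    return (raw - dark) / gain * gain.mean()  # rescale to mean response

# Tiny synthetic example: illumination at the right pixel is half as
# strong, so a uniform scene reads nonuniformly until corrected.
dark = np.array([[10.0, 10.0]])
flat = np.array([[100.0, 50.0]])
raw = dark + 2.0 * (flat - dark)  # uniform scene, twice flat brightness
corrected = flat_field_correct(raw, flat, dark)
```

After correction, the two pixels report the same value, as a uniform scene should.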

Deconvolution microscopy is not quite as simple as pushing a button. There are three classes of algorithms. Inverse filter methods are fastest and least computationally intensive but the most sensitive to noise. The most computationally intensive — blind deconvolution schemes — are less susceptible to noise and do not require knowledge about the point spread function of the object or optics. Constrained iteration, a third algorithm, sits between these two extremes. Fan noted that different algorithms might work best in different situations, and so the optimum approach must be determined for each case.
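The simplest class, the inverse filter, can be sketched with NumPy FFTs. This is a generic Wiener-regularized inverse filter for illustration, not the commercial blind-MLE algorithm the researchers ultimately chose; the function name, the regularization parameter `k` and the synthetic test image are assumptions:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Inverse filtering in the frequency domain, with Wiener-style
    regularization: k damps the noise amplification that makes plain
    inverse filters the most noise-sensitive of the three classes."""
    H = np.fft.fft2(psf, s=blurred.shape)  # PSF transfer function
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# Synthetic check: blur two point sources with a known 3x3 box PSF
# (circular convolution), then recover them.
img = np.zeros((16, 16))
img[5, 7], img[10, 3] = 1.0, 0.5
psf = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconvolve(blurred, psf, k=1e-9)
```

With noiseless data and a known point spread function, a tiny `k` recovers the sources almost exactly; with real camera noise, `k` must be larger, which is exactly the noise-sensitivity trade-off that motivates the more computationally intensive methods.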


In an experimental proof of the concept as applied to microchannel flows, the investigators fabricated plastic microfluidic devices with six channels that were 40 μm deep and 110 μm wide.

They acquired images with an Olympus America Inc. microscope, a cooled scientific-grade CCD camera from Apogee Instruments Inc. of Roseville, Calif., and various excitation and emission filters from Chroma Technology of Rockingham, Vt. For deconvolution software, they used a package from AutoQuant, which is now part of Media Cybernetics of Silver Spring, Md.

To discover which of the algorithms worked best for their particular setup, they sent a uniform flow of a solution containing fluorescein down a D-shaped channel. After capturing the raw image, they processed it using the various deconvolution methods. A variation of blind deconvolution ended up being the algorithm that best reproduced the known flow in the channel.

[Image: TSDeconvolve_Fig2.jpg]
These cross-sectional images of fluorescein flow in a D-shaped channel helped determine the best deconvolution method for the setup. The dashed lines indicate the expected shape. At far left is the raw image as collected; the remaining images were corrected using various deconvolution algorithms, with the far right showing the algorithm that gave the best result. (MLE = maximum likelihood estimation.)

Armed with that information, they studied microflow mixing by sending a fluorescein solution down one microchannel, water down the other, and then merging the two into a combined flow that traveled down a ridged microchannel. The raw data did not reveal anything, but after deconvolution, the images confirmed the presence of twisting flows at the ridges. The work is detailed in the Feb. 6 online version of Analytical Chemistry.

Advances in technology may make the deconvolution microscopy method more powerful and faster. Jonathan Girroir, product marketing manager at AutoQuant, noted that upcoming releases of the software will take advantage of 64-bit operating systems and multiple processor computers — PC innovations that will become more commonplace over time.

[Image: TSDeconvolve_Fig3.jpg]
A microchannel is shown with an extended view of a ridged section of the channel (a). M indicates the intersection of the channels connecting wells 1, 2 and 3. Researchers pumped streams of a fluorescent solution and water into the channel from wells 1 and 2 at equal flow velocity (2.3 cm/s), resulting in a twisting flow of the two. The raw image stack is presented in (b), and the deconvolved image stack in (c).

Multiple processors will allow the deconvolution computational load to be more easily handled, while 64-bit operating systems will allow more memory to be used for deconvolution, which can be important in certain situations. “Having access to more than 2 GB is extremely desirable when working with large data sets,” Girroir said.

According to Fan, the researchers plan to put their method to work in areas where it has already been proved. “We are focusing on using the technique for studying mixing.”

Contact: Z. Hugh Fan, University of Florida; e-mail: [email protected]; Jonathan Girroir, Media Cybernetics Inc.; e-mail: [email protected].

Published: April 2007
