
Life at high speed

JAMES W. BALES, MIT EDGERTON CENTER

As the high-speed imaging expert at the Massachusetts Institute of Technology (MIT) Edgerton Center, I meet researchers from all fields, each with unique high-speed imaging needs. Here are two such applications, from researchers working to understand and address problems in the biological sciences.

Doctoral candidate Alex Abramson, who works in MIT’s Langer Lab, came to my office with an unusual imaging problem. He had been developing a small capsule to deliver insulin orally to a patient. The 7-mm-tall capsule was designed to always land in the correct orientation against the stomach wall, where it would then automatically inject its payload of insulin into the mucosa. This process would prevent the stomach from simply digesting the insulin.

Alex had designed the capsule to have a particular mass distribution and shape — inspired by the shell of the leopard tortoise — so that it was self-righting and always settled on its base regardless of its orientation when released. However, he needed to see and understand the dynamics of this motion on a millisecond timescale. Additionally, he needed to see how the capsule behaved when released in air, water, and gastric juice.

In any imaging task, success hinges on one’s choice of camera, lens, and lighting. Let’s break this out for Alex’s case.

His camera of choice was a high-speed video camera, capable of capturing megapixel images at thousands of frames per second. These systems typically digitize each pixel to 12 bits, generating data at a rate of tens to hundreds of gigabits per second. The rates are so high that the images cannot be streamed to a computer. Instead, the image data is written into a buffer of high-speed memory inside the camera.
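To make those rates concrete, here is a back-of-the-envelope calculation in Python. The sensor size, bit depth, and frame rate below are illustrative values, not the specs of any particular camera.

```python
# Rough data rate for a high-speed camera (illustrative numbers).
width, height = 1280, 800   # roughly a 1-megapixel sensor
bit_depth = 12              # bits digitized per pixel
fps = 5000                  # frames per second

bits_per_frame = width * height * bit_depth
data_rate_gbps = bits_per_frame * fps / 1e9
print(f"{data_rate_gbps:.1f} Gbit/s")  # ~61 Gbit/s
```

At roughly 61 Gbit/s, the stream overwhelms common computer interfaces, which is why the frames land in the camera’s internal memory instead.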

The finite size of the buffer limits total recording time, typically between a few tenths of a second and many tens of seconds, depending on the frame rate and image size. Alex needed less than a second of recording time, so he was fine.
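As a sketch of that limit, using the same hypothetical settings as above and a made-up but plausible 16-GB buffer:

```python
# Recording time before the in-camera buffer fills (illustrative numbers).
buffer_bits = 16 * 8e9            # hypothetical 16-GB buffer
bits_per_frame = 1280 * 800 * 12  # ~1 MP at 12 bits per pixel
fps = 5000

frames = buffer_bits / bits_per_frame
print(f"{frames / fps:.1f} s of recording")  # ~2.1 s
```

Halving the frame rate or the image size roughly doubles the recording time, which is the main trade-off available when an event runs longer than the buffer allows.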

Lighting is often the greatest challenge in any imaging task. For two of Alex’s tests, with the capsule in air and in water, the lighting was straightforward. For the test in gastric juice, we also explored the use of backlighting.

Alex and his collaborators are succeeding in their effort; their work was published in the Feb. 8, 2019, issue of Science.1

Professor Lydia Bourouiba’s research at MIT focuses on the interface between fluid dynamics and epidemiology. Among other topics, she and her group look at the droplets that fly around when one sneezes or flushes a toilet. They quickly discovered that the ejecta from a sneeze is more complicated than simply a cloud of droplets. Strings and streamers of mucus also break into drops and blobs.

Capturing these images meant overcoming many difficulties, ones that forced Lydia to make tough trade-offs in the data she could collect from a given video. Let’s walk through one of her experiments.


Ideally, Lydia would like to image a large area so she can track the trajectory of a blob of fluid as it travels several feet through the air. She also needs high spatial resolution, to see small drops, at a rate of 1000 fps or higher. Unfortunately, imagers for high-speed video cameras tend to have lower resolution than still imagers, for the simple reason that it is difficult to read out megapixel-size images on submillisecond timescales. So, she designed separate experiments for tracking droplets over large distances and for viewing droplets at the fine scale, finding the right camera for each task.
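A quick sketch shows why one camera can’t do both jobs. The sensor width and fields of view below are hypothetical stand-ins, not Lydia’s actual setup.

```python
# Field-of-view vs. resolution trade-off (hypothetical numbers).
sensor_px = 1024            # pixels across the sensor
for fov_mm in (2000, 200):  # wide tracking view vs. close-up
    mm_per_px = fov_mm / sensor_px
    # Assume a droplet needs ~3 pixels across to be tracked reliably.
    print(f"FOV {fov_mm} mm: {mm_per_px * 1000:.0f} um/pixel, "
          f"smallest trackable drop ~{3 * mm_per_px:.1f} mm")
```

At a 2-m field of view, the camera can follow the plume across a room but misses anything smaller than several millimeters; zooming in recovers the fine drops at the cost of losing the long trajectory.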

For Lydia’s imaging, the hardest task was creating the ideal lighting for seeing the spray of drops. One very effective technique is backlighting, where the camera looks at a lit screen, and where the subject — in this case, the drops from the sneeze — flies between the camera and screen, roughly orthogonal to the camera’s line of sight.

Oblique lighting is another useful technique. Here, the camera looks at a black backdrop, and light is brought in at right angles to the camera’s line of sight. A droplet in front of the camera will scatter light into the lens and appear as a bright spot against the dark backdrop. The effect is analogous to dark-field microscopy.

In the visible and near-IR spectra, current cameras are capable of 6-MP images at 8000+ fps, or 1-MP images at 25,000 fps. I expect to see further incremental improvements in these parameters. Additionally, I believe we will see enhanced high-speed capability with SWIR systems as well. LED lighting continues to improve, and for many applications a brighter light is equivalent to a more sensitive camera.

Biomechanics researchers will continue to find new ways to characterize motion, and more of those experiments will use multiple cameras to track organisms and motions in 3D.

Meet the author

James W. Bales is associate director of the Edgerton Center at MIT, where he has been since 1998. He is the instructor for the 6.163 Strobe Project Lab, the photography course that Doc Edgerton offered for many years. In addition, he is the instructor for 6.070/EC.120 Electronics Project Lab. Bales has studied the optical properties of semiconductors, built robot submarines, debugged assembly language programs, traveled as far north as Greenland and as far south as Tasmania, and taught electronics and high-speed imaging.

Reference

1. A. Abramson et al. (2019). An ingestible self-orienting system for oral delivery of macromolecules. Science, Vol. 363, Issue 6427.

The views expressed in Biopinion are solely those of the author and do not necessarily represent those of Photonics Media. To submit a Biopinion, send a few sentences outlining the proposed topic to [email protected]. Accepted submissions will be reviewed and edited for clarity, accuracy, length, and conformity to Photonics Media style.

Published: April 2019
