
Optical Advancements Enable High-Precision 3D Imaging

Advancements in engineered point spread function (E-PSF) technology, in the form of optical phase plates, will enable manufacturers to meet rising demands for ultraprecise object imaging.

LESLIE KIMERLING, WARREN COLOMB, and ANURAG AGRAWAL, DOUBLE HELIX OPTICS

As robotics and automation change the face of manufacturing, demands on industrial inspection have increased. Advancements in engineered point spread function (E-PSF) technology are now allowing manufacturers to incorporate high-resolution 3D imaging for improved object and feature inspection. These E-PSFs can be realized in the form of optical phase plates that can be incorporated into existing imaging systems (Figure 1).

Figure 1. An optical phase plate in its holder, with point spread function (PSF) designs that can be etched on the phase plate. Courtesy of Double Helix Optics.

Once integrated, imaging systems can perform close-range component inspection with improved depth of field and detail, allowing for the imaging and defect detection of smaller and smaller objects, down to the submicron scale.

Beyond inspection, 3D imaging holds promise for machine vision, object recognition, and navigation in high-growth applications such as drones, robotics, and haptics. It is now possible to integrate this novel optical technology into existing system designs with minimal impact on system footprints.

A look at limitations

While we live and move in a 3D world, most imaging systems have historically captured only 2D information. Methods to obtain and use 3D information in settings such as manufacturing and robotics have been studied for decades but remain challenging to implement, particularly in unconstrained environments with variable lighting, specular or deforming scene surfaces, and occluded objects. These conditions reduce spatial awareness, so systems struggle to perform tasks that are easy for humans, such as selecting components from a bin. Unconstrained environments also complicate the inspection of submicron-size objects. Yet as additive manufacturing technologies enable the creation of more components with features at the micron level, advanced 3D inspection and metrology have become crucial.

The past decade has seen a mini-explosion in the number of depth-sensing devices, or 3D cameras. These systems use either a stereo-vision setup or more recent innovations such as structured illumination, time of flight, and light field, and they are very useful for large-scale (centimeter-to-kilometer) measurement of the 3D shape and position of objects.

These imaging modalities, however, face various limitations when inspecting close-proximity objects in the centimeter-to-millimeter range. Stereo-vision systems, for example, become “cross-eyed” at short working distances. Structured light methods are limited by the ratio of the projected pattern’s spatial frequency to the size of the object’s features, as well as by the need for unobscured illumination, which often requires steep illumination angles. Time-of-flight methods are limited by the timing resolution of their sensors, and light-field methods are constrained by limited resolution and the size of the lenslet arrays. Each of these techniques may be further constrained by hardware complexity, size, power consumption, or cost. Compounding these limitations is the growing expectation that manufacturers can accurately measure the 3D features of minute structures.

In the face of these challenges, a new approach to 3D object capture is needed, one that extends capabilities and improves both precision and depth resolution in areas such as 3D machine vision, gesture recognition, and robotics.

Starting small

In 2008, W.E. Moerner of Stanford University and Rafael Piestun of the University of Colorado Boulder partnered to advance a new superresolution microscopy technique for studying cellular structures in full 3D. Unlike scanning approaches, their technique captures the entire 3D volume of interest at once, and it has since enabled scientists to study cellular structures in 3D at the level of the individual molecule.

Today this technique enables the creation of 3D structural models of whole cells, including mitochondria and the nuclear lamina, as well as viruses, bacteria, T cells, and other structures1,2,3 fundamental to scientific discovery and drug development. The technique can also acquire images at frame rates fast enough to track the movement of single molecules inside and on the surface of a cell.

The essence of E-PSF technology

The essence of E-PSF technology is a simple alteration of an imaging system’s optical response, achieved by precisely matching the size and design of a phase mask to the optical system and imaging conditions. In the case of microscopy, the phase mask is matched to the specifications of the microscope objective and the depth-of-focus requirements of the experiment.

The specially designed phase masks morph the optical response by introducing phase delays to certain portions of the wavefront, making it possible to change the shape of the PSF for in-focus and out-of-focus object points. The double-helix PSF (DH-PSF)4 design is one example, in which the image of a single point is altered from the focused spot of light generated by the lens’s circular aperture (known as the Airy disc) to two well-separated spots (Figure 2).
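The effect of a pupil-plane phase delay on the PSF can be sketched numerically with scalar Fourier optics. The toy mask below is a simple charge-1 vortex phase, not the actual double-helix design, chosen only to illustrate that a phase-only change in the pupil reshapes the focal spot (here, from an Airy-like peak to a donut with an on-axis null):

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)
pupil = (R <= 1.0).astype(complex)  # clear circular aperture

def psf(pupil_field):
    # Fraunhofer approximation: the PSF is |FFT of the pupil field|^2
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil_field)))
    p = np.abs(field) ** 2
    return p / p.sum()

airy = psf(pupil)

# Toy phase mask: a charge-1 vortex (illustrative only, not a DH-PSF mask)
vortex = pupil * np.exp(1j * np.arctan2(Y, X))
donut = psf(vortex)

center = N // 2
print(airy[center, center] > donut[center, center])  # True: vortex nulls the axis
```

The same framework accommodates any phase design: substituting a different phase profile into the pupil yields the corresponding engineered PSF.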

Figure 2. PSF of a standard optical system compared to the PSF of the double helix phase mask. Courtesy of Double Helix Optics.

The midpoint of these two well-separated spots corresponds to the lateral position of the object, and the angle between the two spots corresponds to the object’s axial position. Because the spots can stay in focus while rotating up to 180°, extended depth information can be captured with high precision.
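This decoding step can be sketched in a few lines. The function name, the linear angle-to-depth model, and the calibration constants (`deg_per_um`, `angle0`) below are hypothetical; a real system calibrates the angle-versus-depth curve by imaging a point source at known depths:

```python
import numpy as np

def dh_localize(spot1, spot2, deg_per_um=9.0, angle0=0.0):
    """Toy DH-PSF decoder: spot-pair midpoint -> lateral (x, y);
    spot-pair rotation angle -> axial position z (micrometers).
    deg_per_um and angle0 are hypothetical calibration constants."""
    (x1, y1), (x2, y2) = spot1, spot2
    x, y = (x1 + x2) / 2, (y1 + y2) / 2          # midpoint gives X-Y
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    z = (angle - angle0) / deg_per_um            # rotation angle gives Z
    return x, y, z

print(dh_localize((10.0, 5.0), (14.0, 5.0)))  # (12.0, 5.0, 0.0): in focus
print(dh_localize((10.0, 5.0), (12.0, 7.0)))  # (11.0, 6.0, 5.0): rotated 45 deg
```

Because the angle is measured rather than inferred from blur, the depth estimate stays well conditioned across the rotation range.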

The data collected using the DH-PSF consists of a number of these well-separated spots at different orientations corresponding to the object’s lateral (X-Y) and axial (Z) positions. Creating a sharp 3D image from this detailed data set of object points is a complex but solvable matter of image reconstruction. After processing, the result is a sharp, 3D construct of the original object (Figure 3).

Figure 3. A 3D double-helix PSF (DH-PSF) super-resolution image of microtubules (a) captures the detailed 3D information not seen in conventional 2D wide-field imaging (b). Courtesy of Double Helix Optics.

Several types of E-PSFs have been designed for different applications, based on the depth and precision requirements and the signal-to-noise ratio (SNR) of the object being imaged. In addition to the DH-PSF, designs include the single-helix PSF, the tetrapod PSF5, and multicolor PSF designs6 (Figure 4).


Figure 4. The Double Helix Optics library of phase masks. Courtesy of Double Helix Optics.

The first commercial application of E-PSF technology is an upgrade to existing wide-field microscopes for 3D superresolution imaging and tracking. The SPINDLE (a registered product name) enables imaging and tracking down to the level of the individual molecule or nanoparticle. It installs seamlessly between any wide-field microscope and an electron-multiplying CCD (EMCCD) or scientific CMOS (sCMOS) camera using standard C-mounts. An interchangeable library of phase masks allows the PSF to be optimized for the user’s application. Applications include imaging of cellular structures with 20- to 25-nm precision over up to 20 µm of depth range, without altering the user’s existing imaging system setup.

From microscopic to macroscopic

Although the initial applications of E-PSF technology have been in superresolution microscopy, the physics of E-PSF can be applied broadly to any imaging system by scaling the phase mask to match that system. When applied to machine vision, for example, E-PSF overcomes many of the previously mentioned challenges faced by other 3D-imaging technologies7,8,9.


Furthermore, E-PSF phase plates can be integrated into many existing 2D imaging systems, either by direct integration or by way of a passive optical relay between the camera lens and the sensor. If a 2D camera can resolve the region of inspection, then E-PSFs give that system depth perception.

The first commercial 3D machine vision system incorporating E-PSF technology is now under development. It will simultaneously provide a brightness map (a 2D image) and distance information (a depth map), so that each object feature within a scene is associated with its precise location in 3D space (Figure 5).
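Pairing the two outputs is straightforward in software. The sketch below, with a hypothetical function name and an assumed per-pixel sensor pitch (`pixel_pitch_um`, which a real system would derive from lens calibration), shows how a brightness map and a depth map combine into a list of (x, y, z, intensity) points:

```python
import numpy as np

def to_point_cloud(brightness, depth, pixel_pitch_um=5.0):
    """Combine a 2D brightness map and a per-pixel depth map (both HxW,
    depth in micrometers) into an (N, 4) array of x, y, z, intensity.
    pixel_pitch_um is a hypothetical calibration constant."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.column_stack([
        xs.ravel() * pixel_pitch_um,   # lateral x from pixel column
        ys.ravel() * pixel_pitch_um,   # lateral y from pixel row
        depth.ravel(),                 # z from the recovered depth map
        brightness.ravel(),            # intensity from the 2D image
    ])

cloud = to_point_cloud(np.ones((2, 2)), np.arange(4.0).reshape(2, 2))
print(cloud.shape)  # (4, 4)
```

Each row of the result ties one image feature to its position in 3D space, which is the data product downstream inspection or robotics software consumes.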

Figure 5. Image of a credit card with embossed letters (a). Recovered depth map with depth (in µm) encoded in color (b). 3D view of the depth map overlaid with the brightness map (c). Courtesy of Double Helix Optics.

The E-PSF approach to machine vision provides several advantages over existing methods:

  • Incorporating a phase plate reshapes the focal point to enable depth capture with limited impact on 2D system performance and with minimal shadowing.

  • The depth resolution and the depth of field of an E-PSF 3D-imaging system can be optimized by designing the E-PSF to match the 2D lens in use.

  • The E-PSF technology can be implemented as an add-on to existing 2D imaging systems or as phase plates incorporated into an existing 2D lens system. For 3D machine vision systems designed explicitly for E-PSF, the phase plate adds virtually nothing to the system’s volume or weight. For systems employing OEM or aftermarket imaging components, no second camera or additional light source is needed in most instances.

E-PSF technology opens up numerous possibilities for industrial inspection, materials science, and other commercial applications by enabling conventional 2D imaging systems to simultaneously capture high-resolution depth and intensity information. Moreover, the sensor is amenable to mass production at low cost, enabling applications in areas such as robotics, 3D scanners, advanced manufacturing, and human-machine interfaces. Imaging sensors are now widespread and inexpensive, as is computing power, already an integral part of most cameras, creating the opportunity to add 3D capabilities at limited additional cost.

Meet the authors

Leslie Kimerling is co-founder and CEO of Double Helix Optics, a 3D-imaging company headquartered in Boulder, Colo. A serial entrepreneur, she has led multiple technology startups from launch through growth. She has a master’s degree in economics from Stanford University and an MBA from the University of California, Los Angeles (UCLA) Anderson School of Management; email: [email protected].

Warren Colomb is an optical systems engineer at Double Helix Optics. He has a Ph.D. in applied physics from the Colorado School of Mines; email: [email protected].

Anurag Agrawal is the principal optics scientist at Double Helix Optics. He has a Ph.D. in electrical engineering (computational optical imaging) from the University of Colorado, Boulder; email: [email protected].

Acknowledgments

Some of this material is based upon work supported by the NSF SBIR Grant IIP-1059286, Grant IIP 1534745, and Grant IIP 1353638. The imaging work was performed at the BioFrontiers Institute Advanced Light Microscopy Core.

References

1. A.-K. Gustavsson et al. (2018). 3D single-molecule super-resolution microscopy with a tilted light sheet. Nat Commun, Vol. 9, p. 123.

2. A.R. Carr et al. (2017). Three-dimensional super-resolution in eukaryotic cells using the double-helix point spread function. Biophys J, Vol. 112, pp. 1444-1454.

3. S. Jain et al. (2016). ATPase-modulated stress granules contain a diverse proteome and substructure. Cell, Vol. 164, pp. 487-498.

4. S.R.P. Pavani and R. Piestun (2008). High-efficiency rotating point spread functions. Opt Express, Vol. 16, p. 3484.

5. Y. Shechtman et al. (2015). Precise three-dimensional scan-free multiple-particle tracking over large axial ranges with tetrapod point spread functions. Nano Lett, Vol. 15, Issue 6, pp. 4194-4199.

6. Y. Shechtman et al. (2016). Multicolour localization microscopy by point-spread-function engineering. Nat Photonics, Vol. 10, pp. 590-594.

7. A. Greengard et al. (2006). Depth from diffracted rotation. Opt Lett, Vol. 31, p. 181.

8. S. Quirin and R. Piestun (2013). Depth estimation and image recovery using broadband, incoherent illumination with engineered point spread functions. Appl Opt, Vol. 52, pp. A367-A376.

9. R. Berlich et al. (2016). Single shot three-dimensional imaging using an engineered point spread function. Opt Express, Vol. 24, p. 5946.


Published: March 2019