

Lighting Advancements Add Muscle to Machine Vision

STEVE KINNEY, SMART VISION LIGHTS




After years of investment and research, light-emitting diodes (LEDs) are cheaper and brighter than ever before, and they consume less power. With reliability measured in tens of thousands of hours, LEDs are proving to be cost-effective solutions in both commercial lighting and machine vision applications. Where a 40-W incandescent bulb could be replaced by a 10-W LED source just a few years ago, LED-based replacement bulbs today consume only about 6 W [1].

LED efficiency gains have also allowed machine vision lighting suppliers to develop LED-based options with sufficient brightness to replace tungsten incandescent bulbs and halogen-based illumination systems. While some industrial settings use cool white lights for illumination, many machine vision applications rely on colored, infrared (IR), and ultraviolet (UV) LEDs to optimize key details in captured images.

LED efficiency can be quantified as luminous efficacy: the optical output, measured in lumens, divided by the electrical input power in watts. Input power that is not converted to light is converted to heat and must be dissipated. As efficiencies have increased, LEDs have come to produce more light with less heat. This, in turn, has decreased heat dissipation requirements to the point where some LED lighting products can integrate driver and control electronics. This development has introduced several significant benefits for machine vision applications.
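
As a minimal worked example in Python: the 6-W input figure comes from the article, while the 800-lm output and 40% wall-plug efficiency are assumed values for illustration only.

```python
# Luminous efficacy: optical output (lumens) divided by electrical input
# power (watts). The 6-W input comes from the article; the 800-lm flux
# and 40% wall-plug efficiency are assumptions for illustration.
luminous_flux_lm = 800.0
input_power_w = 6.0

efficacy_lm_per_w = luminous_flux_lm / input_power_w   # ~133 lm/W

# Input power not converted to light becomes heat that must be dissipated.
wall_plug_efficiency = 0.40                            # assumed
heat_w = input_power_w * (1.0 - wall_plug_efficiency)  # ~3.6 W of heat

print(f"Efficacy: {efficacy_lm_per_w:.0f} lm/W; heat load: {heat_w:.1f} W")
```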

Integrated lighting controls, for example, enable features such as adaptive illumination, which, in turn, supports imaging systems with dynamic fields of view (FOVs). Embedded control electronics also significantly reduce the latency associated with cable runs from external LED drivers. Lower latency allows more control over the shape and precision of LED pulses, which is important when syncing light pulses with camera exposure times.

Fast-focusing machine vision

If a machine vision application lacks a predefined set of focus distances for the optical character recognition (OCR) reader or camera, it may require a liquid lens or autofocus mechanism to help the imager adapt to application demands in real time. Autofocus is used in machine vision applications where the distance between the camera lens and target object may change between image acquisitions [2]. This situation occurs often in many embedded vision applications, such as industrial hand-held scanners, barcode readers, portable data terminals, and smart cameras.

Autofocus optical systems comprise a sensor, a controller, and a mechanical or optical element that brings an automatically or manually selected point or area into focus. Liquid lens technology, by contrast, is based on cells containing an optical-grade fluid. Mechanical pressure or electrostatic actuation changes the shape of these cells, rapidly changing the lens curvature.
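
The control loop behind such systems can be sketched as follows. The `capture` and `set_focus` callables stand in for a hypothetical camera and lens API, and the variance-of-Laplacian focus metric is one common choice among several, not necessarily what any particular vendor uses.

```python
import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Variance of the Laplacian: a common contrast-based focus metric."""
    img = image.astype(float)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return float(lap.var())

def autofocus(capture, set_focus, focus_steps):
    """Sweep candidate focus settings and keep the sharpest.

    capture     -- callable returning a grayscale frame (hypothetical API)
    set_focus   -- callable applying a focus setting (hypothetical API)
    focus_steps -- iterable of candidate focus settings to try
    """
    best_setting, best_score = None, -1.0
    for setting in focus_steps:
        set_focus(setting)            # move the optics, or re-shape a liquid lens
        score = sharpness(capture())  # evaluate contrast at this setting
        if score > best_score:
            best_setting, best_score = setting, score
    set_focus(best_setting)           # return to the sharpest setting found
    return best_setting
```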

Both mechanical and liquid-lens autofocus systems allow users to set and save different focus values for reading codes at various working distances, or for inspecting variously sized parts. Increased demand for automation has driven the development of even more dynamic, higher-speed autofocusing systems. Such fast-focusing systems give manufacturing and logistics operations more flexibility when inspecting multiple products, packages, or processes within the same operation. Dynamic autofocus systems are now available on a multitude of 2D and 3D vision systems targeting production lines that require regular part changes or that involve mixed-model processing of variously sized objects. By automatically refocusing in milliseconds, these systems can reduce the number of cameras needed for large depth-of-field machine vision applications.

However, because proper lighting can make or break a machine vision application, simply adjusting the focus within a large depth of field may not ensure consistently crisp images if the illumination does not adapt as well.

Adaptive illumination

Advanced adaptive illumination systems were specifically designed to accommodate camera autofocus features by dynamically changing the systems’ FOV to focus light where it is needed most. Such systems rely on integrated lighting controls and can be set up to automatically adjust the FOV based on application needs and real-time data (Figure 1).



Figure 1. Advanced adaptive illumination systems were specifically designed to accommodate autofocus cameras by dynamically changing the field of view where the systems focus their light. Such systems rely on integrated lighting controls and can be set up to automatically adjust the field of view based on application needs and real-time data. Courtesy of iStock.com/MotoEd.

Adaptive illumination is available in linear bar formats that may feature three rows of LEDs equipped with 10°, 30°, and 50° lenses controlled on separate electronic channels. Tunable FOV technology can also be integrated into square and ring lights.

Because each channel controls a dedicated narrow, medium, or wide lens angle, users can achieve maximum projection over a narrow area by energizing only the 10° channel. To illuminate larger areas, the 30° or 50° channel can be energized instead.

Discrete control of each row further allows users to mix and match the 10°, 30°, and 50° channels to achieve a balanced, uniform FOV from far to near, offering precise control over illumination across the imaging system's working distance, as sketched below.
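
A rough sketch of this mixing logic in Python; the channel names, intensity values, and distance thresholds are illustrative assumptions, not a vendor API.

```python
# Lens angles (degrees) for the three discretely controlled LED rows.
CHANNELS = {"narrow": 10, "medium": 30, "wide": 50}

def channel_mix(working_distance_m: float) -> dict:
    """Return per-channel intensities (0.0-1.0) for a given distance."""
    if working_distance_m > 1.0:    # far target: concentrate the light
        return {"narrow": 1.0, "medium": 0.2, "wide": 0.0}
    if working_distance_m > 0.5:    # mid-range: blend rows for uniformity
        return {"narrow": 0.4, "medium": 1.0, "wide": 0.3}
    return {"narrow": 0.0, "medium": 0.4, "wide": 1.0}  # near: spread wide
```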

This flexibility is particularly beneficial in logistics applications, where machine vision systems monitor fast-moving objects that can range in size from 1-m boxes to flat packs and envelopes.

In distribution centers operated by UPS and FedEx, for example, vision systems scan barcodes to sort variously sized packages as they zip past on conveyor lines. These conveyors often pass through tunnel structures in which barcode readers and lights are mounted more than a meter above the moving belt (Figure 2). The distance between the reader and each barcode can change, however, and this presents a challenge: the focus and lighting requirements when reading labels on small flat packs that are more than 3 ft (0.9 m) away are very different from the requirements for scanning a large box top that may be inches from the camera lens.



Figure 2. In logistical distribution centers, vision systems are mounted within tunnel structures through which packages pass on conveyor lines. Because packages can vary widely in size and orientation, adaptive lighting and autofocus cameras are often used to accurately image details at various distances from the camera. Courtesy of Smart Vision Lights.

Such systems are equipped with sensors that detect the height of each package and automatically adjust both the camera’s depth of field and the adaptive illumination system’s FOV before capturing barcode or image data. Integrated LED controls allow real-time adjustment of the projection angle, optimizing illumination for the height of each package and reducing misread labels (Figure 3).
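
One way to picture that sequence in code, reusing the `channel_mix` helper sketched earlier. Every object and method here (`height_sensor`, `lens`, `light`, `camera`) is a hypothetical stand-in, and the 1.2-m mount height is an assumption consistent with the article's "more than a meter" description.

```python
CAMERA_MOUNT_HEIGHT_M = 1.2   # assumed reader height above the belt

def capture_package(height_sensor, lens, light, camera):
    """Hypothetical trigger sequence for a tunnel scanner."""
    height_m = height_sensor.read()                # measured package height
    distance_m = CAMERA_MOUNT_HEIGHT_M - height_m  # label-to-lens distance
    lens.set_focus(distance_m)                     # refocus in milliseconds
    light.set_channels(channel_mix(distance_m))    # match the lit FOV
    return camera.capture()                        # then acquire the image
```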



Figure 3. Optimizing the projection angle of LED lighting allows the system to focus on the variable heights of different packaging, reducing misread labels on items ranging from flat packs to 1-m-tall boxes. Courtesy of Smart Vision Lights.

Squaring pulse shape

External LED lighting controllers are subject to unavoidable parasitic electrical losses in the cables that connect the controller to the LED die. In practical terms, this translates into added latency: it takes longer for the LED to respond to external controls and inputs.

While parasitic losses may not be a problem in comparatively low-speed machine vision applications, such as barcode verification and mark quality assessment, they become problematic in high-speed applications that require precise syncing of light pulses with camera exposure times.

If fast-moving objects appear blurry in an image, it is often necessary to shorten the camera’s exposure time. Short exposure times, however, generally require more intense and evenly distributed lighting.
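
A common rule of thumb makes this trade-off concrete: to keep motion blur under one pixel, the exposure must be shorter than the time the target takes to cross one pixel's footprint. The belt speed, FOV width, and resolution below are illustrative assumptions, not figures from the article.

```python
# Rule of thumb: keep motion blur under one pixel, so the exposure must be
# shorter than the time the target takes to cross one pixel's footprint.
belt_speed_m_s = 2.0     # conveyor speed (assumed)
fov_width_m = 0.5        # horizontal field of view at the target (assumed)
sensor_pixels = 2048     # horizontal sensor resolution (assumed)

meters_per_pixel = fov_width_m / sensor_pixels       # ~0.24 mm per pixel
max_exposure_s = meters_per_pixel / belt_speed_m_s   # ~122 microseconds

print(f"Max exposure for <1 px blur: {max_exposure_s * 1e6:.0f} us")
```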

In high-speed imaging applications, it is not enough to simply flood the scene with photons. Xenon lamps generally emit higher-intensity light and shorter pulses than LEDs do. However, due to these lamps' poor pulse control, only about 10% of their light can be effectively utilized.

LEDs supply better control over the duration, intensity, and shape of each pulse. This is a key consideration because flux density, defined as the number of photons per unit area projected within the target FOV per second, has a greater impact on image capture than the absolute intensity of a light source.
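
Written as an equation, with $N$ the number of photons delivered, $A$ the illuminated area of the target FOV, and $t$ the exposure interval:

\[
\text{flux density} = \frac{N}{A\,t} \quad \left[\text{photons} \cdot \text{m}^{-2} \cdot \text{s}^{-1}\right]
\]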

Pulsing LEDs at higher currents can also increase their output by 3 to 8× in strobing applications.

Overdriving LEDs at lower duty cycles can conceivably achieve 100,000 strobes per second at up to 200,000 lux without exceeding thermal limits. For example, overdriving at a 10% duty cycle means that the light is on only 10% of the time, so the average power falls to one-tenth of what it would be in continuous operation. Additionally, supplying maximum output in a very short pulse minimizes heat dissipation requirements. But again, supplying full light intensity without image degradation requires precise control of the pulse shape as well as its rate.
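
The duty-cycle arithmetic, made explicit in Python: the 10% duty cycle and 100,000 strobes per second come from the text, while the 24-W overdriven drive level is an assumed figure for illustration.

```python
# Duty-cycle arithmetic for overdriven strobing. The 10% duty cycle and
# 100,000 strobes per second come from the text; the 24-W overdriven
# drive level is an assumed figure for illustration.
overdriven_power_w = 24.0
duty_cycle = 0.10                        # light on only 10% of the time

average_power_w = overdriven_power_w * duty_cycle    # 2.4 W: one-tenth

strobe_rate_hz = 100_000
pulse_width_s = duty_cycle / strobe_rate_hz          # 1 microsecond per pulse

print(f"Average power: {average_power_w:.1f} W; "
      f"pulse width: {pulse_width_s * 1e6:.0f} us")
```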

The same integrated LED driver and controller technology that enables adaptive lighting also dispenses with the cabling required to attach external controllers, and with it many of the latency issues that spread the edges of LED pulses. When electronics are integrated directly on the motherboard of the LED source, high-current pulses from the driver travel across comparatively short printed circuit board (PCB) traces to the LED die, delivering tens of amps of drive current in substantially less time and without the parasitic losses and delays associated with external drivers. As a result, the LED can achieve full output within 500 ns or less, with a comparably fast turn-off.

When applied to high-speed imaging, such control enables each pulse to have a sharper square shape that more fully syncs with camera shutter times to ensure 100% intensity throughout the shutter cycle and provide evenly distributed high-intensity light (Figure 4).



Figure 4. Comparative pulse widths for machine vision light sources. The shape of an LED pulse is just as important in high-speed imaging applications as the light source’s absolute intensity. While xenon lamps (black line) have comparatively higher intensities than LEDs, only 10% of the light from these lamps is useful, due to their poor pulse control. LED sources with external controllers (red line) contend with the inescapable parasitic impedances associated with long cable connections. Integrating the driver and controls close to the LED die on an LED motherboard eliminates these parasitic impedances, allowing the LED to achieve full power within 500 ns and producing sharply defined square-shaped pulses (blue line) in the microsecond range. Courtesy of Smart Vision Lights.


Illumination versatility

In addition to its ability to freeze motion in high-speed imaging applications, LED lighting with integrated controls makes more effective use of the light it generates, even at lower duty cycles. This further reduces heat generation and extends the lifetime of the LED array.

Even in more conventional lower-speed applications, integrated LED drivers can run at less than 100% power in continuous mode, which also significantly extends LED life without affecting flux density.

Further, adaptable LED lighting with tunable FOV based on discretely controlled rows of LEDs offers generational improvements in efficiency when compared to legacy sources.

Meet the author

Steve Kinney is director of training, compliance, and technical solutions at Smart Vision Lights. He is also a member of the A3 board of directors and a past chairperson of A3’s Camera Link Committee; email: steve.kinney@smartvisionlights.com.

References

1. Volt Lighting (Jan. 13, 2022). Lumens to watts conversion chart: choose the right LED bulb, www.voltlighting.com/learn/lumens-to-watts-conversion-led-bulb.

2. P. Kumar (Oct. 14, 2021). E-con Systems blog. Liquid lens autofocus vs voice coil motor (VCM) autofocus, www.e-consystems.com/blog/camera/technology-thursday/liquid-lens-autofocus-vs-voice-coil-motor-vcm-autofocus.
