The demand for high-speed imaging has increased for a broad range of applications, from automotive crash testing to animal behavior to product component performance.
High-Speed Video: Selecting a Slow-Motion Imaging System
Andrew Bridges, Photron USA, Inc.

There is a growing market for imaging systems that provide an immediate, slow-motion view of a process, allowing one to see events that happen too quickly for the human eye to perceive or comprehend.
The process of selecting a system to suit a particular need or application can be difficult because of the wide range of available systems. This article will serve as a guide for evaluating the performance parameters and specifications of a particular system. Whether you are solving a costly production-line jam, watching a dummy’s head hit a steering wheel in a 50-mph head-on crash, capturing a shark attack on a Cape fur seal (Figure 1), checking the sabot separation from a tank-killing shell or simply trying to adjust your golf swing, the following will provide an overview. The discussion includes information about several cameras and systems and the questions to consider before purchasing a slow-motion imaging system.
Figure 1. Shark attacking a Cape fur seal. Footage captured at 1000 fps (1024 × 1024 pixels) with Photron’s Ultima APX camera. Courtesy of BBC-TV’s Planet Earth, “Pole to Pole” episode.
High-speed video cameras operate across a wide range of frame rates — from 60 frames per second (fps) to over one million fps. All high-speed video cameras operate at full resolution up to a certain speed, and then reduce the resolution, or window, to achieve higher speeds. It is important to establish what frame rate you require to capture the event that you are viewing in slow motion. When recording a cyclic process, such as labeling or packaging that repeats a certain number of times per second, a minimum of three images per cycle is generally required to view and understand the phenomenon. If a box-folding process on a production line runs at 6000 units per minute, that equals 100 boxes folded per second. By the above rule, the process must be recorded at a minimum of 300 fps for easy viewing and comprehension.
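As a rough sketch, the three-images-per-cycle rule of thumb reduces to a one-line calculation (the function below is purely illustrative and not part of any camera vendor's software):

```python
def min_frame_rate(units_per_minute: float, frames_per_cycle: int = 3) -> float:
    """Minimum camera frame rate (fps) to capture a cyclic process,
    assuming the rule of thumb of three images per cycle."""
    cycles_per_second = units_per_minute / 60.0
    return cycles_per_second * frames_per_cycle

# Box folding at 6000 units per minute -> 100 cycles/s -> 300 fps minimum
print(min_frame_rate(6000))  # 300.0
```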
Figure 2. High-speed image sequence of Atlas missile launch.
If the event is not cyclical, such as a missile launch (Figure 2) or a vehicle impact test, then careful planning is required to capture the action at the most significant moment. It is important to determine what temporal detail must be measurable in the finished image sequence or output video.
In an automotive crash test (recorded at 1000 fps, per federal mandate), most of the action occurs within 0.01 s or 10 ms. In recording a missile launch, the speed of the action can be even higher. If a projectile is traveling at 500 m/s (the Sidewinder missile easily exceeds this), and there is a 100-m field of view (FOV), it will pass through the image window in 0.2 s or 200 ms.
However, if you need to capture 100 frames within this 100-m FOV, you will need a camera that can take an image every 2 ms, which equates to 500 fps. If the FOV is reduced to 10 m while all other criteria remain the same, it will require 10 times (5000 fps) the speed to capture the same 100 frames. Frame rate comes down to how many images you want to see of the event, regardless of whether it is per cycle or the whole event.
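The same field-of-view arithmetic can be wrapped in a small helper (the names are my own illustration, not any vendor's API):

```python
def required_fps(projectile_speed_mps: float, fov_m: float,
                 frames_wanted: int) -> float:
    """Frame rate needed to record frames_wanted images while an object
    crosses the field of view: frames / (fov / speed)."""
    return frames_wanted * projectile_speed_mps / fov_m

# 500 m/s projectile, 100 m FOV, 100 frames -> an image every 2 ms
print(required_fps(500, 100, 100))  # 500.0
# Shrinking the FOV to 10 m demands 10x the rate
print(required_fps(500, 10, 100))   # 5000.0
```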
Another area to consider when evaluating a high-speed digital video system is record duration, or record time. This is often confused with how the camera is triggered, which will be discussed later. The real questions are: How long does the process last, and how much of the event (in seconds) needs to be recorded? High-speed video cameras use onboard digital random-access memory (RAM) to save the images. There are ways to extend the record duration, such as reducing the speed or resolution, but in essence you have to determine how long you need to record. The latest systems also enable users to extend the recording time by reducing the bit depth of the pixels recorded (more on this topic later). Reducing the bit depth from 12 to 8 bits produces a 50 percent increase in recording capacity.
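To see how bit depth trades against record time, consider this back-of-the-envelope sketch (the 8 GB memory size and frame dimensions are hypothetical examples, not any specific camera's figures):

```python
def record_seconds(memory_gb: float, width: int, height: int,
                   bits_per_pixel: int, fps: float) -> float:
    """Approximate record duration achievable with a given amount of
    onboard RAM (ignores any per-frame overhead)."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    frames_stored = memory_gb * 1e9 / bytes_per_frame
    return frames_stored / fps

# Hypothetical 8 GB camera recording 1024 x 1024 at 1000 fps:
t12 = record_seconds(8, 1024, 1024, 12, 1000)  # 12-bit recording
t8 = record_seconds(8, 1024, 1024, 8, 1000)    # 8-bit recording
print(round(t8 / t12, 2))  # 1.5 -> dropping to 8 bits adds 50% record time
```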
If the event occurs intermittently, the question is not so much “How long a record time do I need?” as “How do I trigger the camera so that I capture video every time the problem occurs?” Digital high-speed cameras can remain in record mode almost indefinitely as they cycle data through their memory buffer on a first-in/first-out (FIFO) basis. This is a vast improvement over older film cameras, which took time to get up to speed (a digital camera is instantly locked to any crystal-stabilized speed you select) and then could maintain that speed for only a few seconds before running out of film. When the digital buffer is full, the first image recorded is automatically overwritten. The system continues to overwrite data until it receives a trigger signal, such as an optical or audio trigger, a switch closure, or a digital TTL trigger from an alarm or a keyboard keystroke.
Depending upon how the system has been configured by the operator, it can save all the images recorded before the trigger signal was received, save everything after the signal came in, or a variable percentage of pre- or post-trigger images. Advanced systems can automatically download some or all of the saved images to a networked hard drive before automatically rearming to await the next trigger signal.
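A minimal sketch of the FIFO buffer and the pre-/post-trigger split described above (the capacity and percentages are illustrative; no vendor's control software looks exactly like this):

```python
from collections import deque

class CircularRecorder:
    """Toy model of a high-speed camera's FIFO image buffer."""

    def __init__(self, capacity_frames: int, pre_trigger_fraction: float):
        # deque with maxlen silently discards the oldest frame when full,
        # mimicking the camera overwriting its earliest image
        self.frames = deque(maxlen=capacity_frames)
        # frames still to be recorded after the trigger arrives
        self.post_trigger_frames = int(capacity_frames * (1 - pre_trigger_fraction))

    def record(self, frame):
        self.frames.append(frame)

# 1000-frame buffer configured for 75% pre-trigger, 25% post-trigger
rec = CircularRecorder(1000, 0.75)
for i in range(2500):   # camera cycles indefinitely while awaiting a trigger
    rec.record(i)
print(len(rec.frames), rec.frames[0])  # 1000 1500 -> only the newest 1000 kept
```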
Resolution, or more correctly spatial resolution, must be considered when seeking the ideal system for your specific needs. A real-life scenario shows why resolution matters: One customer needed to measure within 1/10 in. across an 8.5-ft field of view. Since 8.5 × 12 = 102, the camera had 102 in. to cover. To measure to 1/10 in., it would need 10 times this number, or 1020 pixels, across the field of view.
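The customer's arithmetic generalizes to a one-line rule of thumb (assuming one pixel per smallest measurable increment, before any subpixel tracking):

```python
def pixels_needed(fov_inches: float, smallest_increment_inches: float) -> float:
    """Minimum pixel count across the field of view to resolve
    one measurement increment per pixel."""
    return fov_inches / smallest_increment_inches

# 8.5 ft = 102 in. field of view, measured to 1/10 in.
print(round(pixels_needed(8.5 * 12, 0.1)))  # 1020
```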
Figure 3. Fastcam SA-X - megapixel resolution to 12,500 fps.
Developments in motion-tracking algorithms enable motion analysis software to track very accurately, to about one-tenth of a pixel. However, it is still recommended that, whenever possible, you have the full quota of pixels needed to discern what you are viewing. To achieve the desired framing rate (camera speed), you may be forced to sacrifice some resolution. Photron’s Fastcam SA-X, for example, maintains full megapixel resolution up to 12,500 fps (Figure 3). When selecting a camera, it is important to determine what the pixel resolution is at the speed you require, since all high-speed video cameras reduce resolution to achieve higher speeds.
The other form of resolution to consider is bit depth, sometimes referred to as dynamic range. Bit depth refers to how many shades of gray the sensor uses to transition from pure white to pure black. Older systems used 8 bits, meaning 256 steps from white to black. Systems now offer 10 bits (1024 steps), 12 bits (4096) or even 14 bits (16,384), which can be essential for certain advanced applications. For the most part, 8 to 12 bits are more than enough, especially given that Windows displays only 8 bits per color channel and, many times, the sensor produces only eight to 10 usable bits; the remainder are lost in noise. To fully appreciate those additional two to six bits, you would need to invest in specialized and expensive hardware and displays. The additional bits are useful in megapixel systems such as the Fastcam SA5 and SA-X and high-definition (HD) cameras like the SA2, SA6 and BC2 because they offer the ability to select which 8 of the 12 recorded bits you display. This can be a very effective means of extracting the maximum detail from shadows or other underexposed areas, or an additional means of prolonging the record time.
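Selecting which 8 of 12 recorded bits to display amounts to a bit shift. The sketch below (using NumPy, and purely illustrative of the idea) shows how a smaller shift brightens shadow detail at the cost of clipping highlights:

```python
import numpy as np

def display_window(frame12: np.ndarray, shift: int) -> np.ndarray:
    """Map a 12-bit frame to 8 bits by choosing which bits to show.
    shift=4 keeps the most significant 8 bits; smaller shifts reveal
    shadow detail but saturate bright areas."""
    return np.clip(frame12 >> shift, 0, 255).astype(np.uint8)

frame = np.array([40, 400, 4000], dtype=np.uint16)  # dark, mid, bright pixels
print(display_window(frame, 4))  # top 8 bits: 2, 25, 250
print(display_window(frame, 0))  # bottom 8 bits: 40, 255, 255 (highlights clip)
```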
High-speed systems are generally available in both monochrome and color. They use the same basic monochrome sensor, but the color versions have a color filter array attached, which sacrifices some light sensitivity, even when microlenses are used to maximize the number of photons falling on the light-gathering part of each pixel. Most systems adopt a color matrix known as a Bayer pattern to produce acceptable-looking colors from what is, in reality, a black-and-white sensor. This interpolated color assigns three color values to each monochrome pixel, which is why color systems quote three times the number of bits: 24 vs. 8, 30 vs. 10, etc. If you do not have a critical need for color images, it is best to stick with monochrome systems, as they tend to be less expensive as well as more sensitive, while providing comparable image quality.
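The Bayer idea can be illustrated by pulling the four color sub-mosaics out of a raw frame (the RGGB layout below is one common convention, not every sensor's):

```python
import numpy as np

def split_bayer_rggb(raw: np.ndarray):
    """Separate an RGGB Bayer mosaic into its R, G (x2) and B sample planes.
    A real camera interpolates ("demosaics") these to full color."""
    r = raw[0::2, 0::2]    # red samples
    g1 = raw[0::2, 1::2]   # green samples on red rows
    g2 = raw[1::2, 0::2]   # green samples on blue rows
    b = raw[1::2, 1::2]    # blue samples
    return r, g1, g2, b

raw = np.arange(16).reshape(4, 4)  # toy 4 x 4 sensor readout
r, g1, g2, b = split_bayer_rggb(raw)
print(r.shape)  # (2, 2) -- each color plane holds a quarter of the pixels
```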
Shuttering and light sensitivity
It is possible to record high-speed images of a mousetrap closing (Figure 4) at 1000 fps with no additional shuttering, so the effective shutter, or exposure, time is 0.001 s. Upon closer examination, however, it will be apparent that the trap jaws are quite blurred. One might assume that more frames per second are needed, but there are already sufficient images; they are simply blurred. The solution is to increase the shutter speed.
Figure 4. Image sequence of a mousetrap.
Shutter speed is often confused with the framing rate, but they are distinctly different. A 35-mm film camera has shutter speeds ranging from seconds to thousandths of a second, but it still takes only one to three pictures per second at most. Similarly, if a high-speed camera is recording at 1000 fps, ideally it is gathering light (exposing the sensor) for 0.001 s per frame. With digital gating electronics, the actual time the sensor is exposed to light can be reduced to microseconds or less. In the mousetrap example, if we keep the record rate at 1000 fps but push the shutter from the reciprocal of the frame rate (0.001 s, or 1 ms) down to 100 μs, the blur is reduced to one-tenth of its original length.
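The effect of shutter time on blur can be put in numbers (the speed, field of view and pixel count below are invented for illustration):

```python
def blur_pixels(object_speed_mps: float, shutter_s: float,
                fov_m: float, pixels_across: int) -> float:
    """Motion blur, in pixels, smeared across the image during one exposure."""
    pixels_per_meter = pixels_across / fov_m
    return object_speed_mps * shutter_s * pixels_per_meter

# Illustrative: jaw tip moving 5 m/s, 0.5 m FOV, 1024 pixels across
print(round(blur_pixels(5, 1e-3, 0.5, 1024), 2))  # 10.24 pixels at a 1 ms shutter
print(round(blur_pixels(5, 1e-4, 0.5, 1024), 2))  # 1.02 pixels at 100 us
```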
Do not discount blur; it can be a very important consideration when working with high-speed events, especially projectiles, where it can be used to accurately calculate how fast a projectile is moving if the framing rate and shutter exposure time are known. Photron recently won a contract to replace the aging film cameras at a major military test range. The contract required our high-speed digital cameras to be fitted with automatic exposure control. The shutter needed to auto-adjust to compensate for changing lighting conditions, such as the sun appearing or disappearing behind cloud cover. One important requirement was that the shutter speed not be adjusted above a maximum predefined value, calculated to ensure the object of interest remained blur-free. In this case, it was better to be underexposed than to have any blur.
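Inverting that relationship gives a simple streak velocimetry estimate (the streak length, image scale and shutter time below are made-up numbers):

```python
def speed_from_blur(streak_pixels: float, meters_per_pixel: float,
                    shutter_s: float) -> float:
    """Estimate projectile speed from the length of its blur streak,
    given the image scale and the known shutter exposure time."""
    return streak_pixels * meters_per_pixel / shutter_s

# A 20-pixel streak at 5 mm/pixel with a 100 us shutter
print(round(speed_from_blur(20, 0.005, 1e-4)))  # 1000 m/s
```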
In the life sciences arena, light sensitivity is a major concern, as high-intensity lighting tends to generate a great deal of heat — not always a good thing when recording animals or insects. New CMOS sensors are not only more sensitive than their CCD counterparts, they also eliminate image blooming (also known as whiteout, tearing or smearing), in which an illuminated hot spot produces a large vertical streak through the image.
Quantifying a sensor’s light sensitivity is an inexact science when using the familiar ASA/ISO units used to rate 35-mm film, though more and more companies are adopting the saturation-based (Ssat) method of ISO 12232. Make sure any camera’s sensitivity is specified according to a recognized ISO or similar standard, and not one the manufacturer has made up. If the subject is light- or heat-sensitive, or the production environment is problematic (e.g., some production lines use light sensors as safety guards, and additional lighting can accidentally trigger them), it is best to avoid using anything less than 1200 ISO/ASA for a monochrome camera, or one-third that for color; color sensors are usually one-half to one-third as sensitive as their black-and-white counterparts.
Because of these varying and potentially confusing factors, the best advice for finding the right camera system for the right job is to invite the camera vendors in and insist on a live demonstration. It is easy to demonstrate a system in the conference room, under ideal and controlled conditions, but your purchase should be based on a real-life demonstration with actual conditions in the exact environment in which you’ll need the system to perform.
Considering the end use
The next consideration is the end use. While troubleshooting a gasket manufacturing line, an instant slow-motion review of the high-speed video recorded may be all that is needed. If it is necessary to save a portion of the image sequence for later review and/or analysis, you will need to determine some fundamental issues, such as how to get the images out of the camera’s RAM and into the real world.
The key is to look at your PC and determine what communication protocols it supports; alternatively, you may be able to use the camera’s standard HD-SDI or RS-170 video outputs. Most PCs, including laptops, include one or two Gigabit Ethernet ports for connecting to external devices. This should be the first choice unless you have more complex requirements, such as operating a camera located several miles away, which is often the case in military tests involving explosives and/or projectiles.
Figure 5. Photron’s PCI-1024, a PC-based, megapixel, high-speed imager.
It is best to stay away from protocols requiring specialized hardware unless your requirements demand them. It is also important to be able to download the images directly into a recognized and immediately usable format (AVI, JPEG, TIFF, etc.) onto a PC of your choice. Some systems require time-consuming post-mission file conversion, while others download quickly to a dedicated controller but then take far longer to transfer the files to an ordinary PC.
What type of physical package does your application require? There is a wide variety of systems available, from inexpensive, low-resolution plastic units for almost disposable use on the production line, to huge systems built specifically for the long record times needed to cover a missile’s launch or re-entry into the Earth’s atmosphere. Some PCI-based systems, designed to install directly in personal computers, use lower-cost CCD or supersensitive megapixel CMOS sensors. More complex systems require housings engineered to operate reliably onboard crash vehicles or near missile impacts. All of these systems have strengths, weaknesses and differences that may influence your decision when considering your particular imaging requirements.
For systems that require use or control via a computer, it is essential to become familiar with the software supplied with the camera. The software should be easy to use and intuitive, without requiring a master’s degree in computer science. Some manufacturers, including Photron, supply their systems with a software developer’s kit and LabVIEW and MATLAB wrappers to enable advanced users to develop their own interface or integrate camera control into an existing one.
Request a live demo
As with any relatively new technology, there is a lot of seemingly conflicting information. The main question is: What works best for your needs? The answer is in finding a comfortable fit with your requirements and following this single, important rule: Require the systems manufacturer to demonstrate the camera with a real-life, real-time demonstration, within the actual environment in which the high-speed imaging system will be used. A live, in situ demo will bring out the best and the worst of the high-speed camera systems you review. With that, you’ll have the information you need to make the best choice.