
Emerging Applications Drive Image Sensor Innovations

Demand for sensors in automobiles, the Internet of Things, and AI is challenging vendors to integrate systems and improve performance.

HANK HOGAN, CONTRIBUTING EDITOR, [email protected]

CMOS sensors are being taken for a ride. Forecasts call for image sensor market growth to be driven by automobiles, with their increasing use of lidar, cameras, and sensors. To avoid hitting a stop sign, though, vendors need to boost product performance and cut costs. Other emerging markets, such as the Internet of Things (IoT), and technologies such as artificial intelligence (AI) bring demands of their own.

Vendors are responding by increasing sensor spectral range, integrating new capabilities into devices, and adding features such as 3D imaging. The result can be rapid growth in sensors, even in areas that are relatively stable. For instance, the worldwide market for cars is expanding at a relatively modest pace, according to Geoff Ballew, senior director of marketing in the automotive sensing division of chipmaker ON Semiconductor Corp. of Phoenix.

Because cars can encounter large contrast scenes, automotive applications demand high-dynamic-range image sensors, with those capturing images to be seen by a driver also needing LED flicker mitigation. Courtesy of OmniVision Technologies Inc.


However, tepid growth is not the case for automotive imaging. “The number of sensors consumed and attached to those cars is growing wildly,” he said. “The image sensor chip business is growing in excess of 15 to 20 percent a year. The reason for that is cameras increasingly are adding new functionality to cars.”

An example of new functionality is active braking. These systems use cameras to scan the road ahead to spot obstacles and then brake if needed. Other examples include adaptive cruise control and lane departure warning systems.

Some of these new capabilities are mandated by governments, such as the backup cameras that became a requirement for all new cars in the U.S. as of Jan. 1, 2018. Importantly, this directive makes it easier to implement another safety feature: a surround view of the entire car. This requires the use of four or more cameras, with the system stitching together data so that the driver gets a single, unified, top-down picture of what’s nearby. Because of the backup camera edict, only three cameras must be added, lessening the incremental cost of implementing surround view.
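To make the stitching step concrete, here is a minimal sketch of surround-view compositing, assuming each camera's mapping onto a common top-down ground plane has already been obtained through calibration. The function name, homographies, and canvas size are hypothetical, not any vendor's implementation:

```python
# Minimal surround-view compositing sketch (OpenCV).
# Assumes a precomputed 3x3 homography per camera that maps its image
# onto a shared top-down ground plane, obtained offline via calibration.
import cv2
import numpy as np

def surround_view(frames, homographies, canvas_size=(800, 800)):
    """Warp each camera frame onto one top-down canvas and composite."""
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
    for frame, H in zip(frames, homographies):
        warped = cv2.warpPerspective(frame, H, canvas_size)
        covered = warped.any(axis=2)       # pixels this camera sees
        canvas[covered] = warped[covered]  # later cameras win in overlaps
    return canvas

# Usage with four views: front, rear, left, right.
# top_down = surround_view([f_front, f_rear, f_left, f_right],
#                          [H_front, H_rear, H_left, H_right])
```

A production system would blend the overlap regions rather than overwrite them, but the overall flow, warp then composite, is the same.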

Regulations also could up the automotive sensor attach rate in other ways. Consider fuel economy standards, which may rise significantly over the coming years. To meet that challenge, carmakers are looking at everything, including the wing or side mirrors.

“They stick out. They disturb the aerodynamics of the vehicle, and they actually have a meaningful impact on the fuel economy,” Ballew said. “People are asking, could we replace those with cameras, very small and very streamlined, and then put a display in the vehicle for the driver.”

Once these cameras are in place, the approach could have cascading benefits beyond improved fuel economy, he said. Unlike a mirror, the cameras could perform machine vision tasks, such as detecting an object. Cameras can enhance contrast as well, improving visibility at dusk, at night, or in fog and rain. The ability to adjust the image or add intelligence to the system could boost functionality, which, in turn, could further increase the use of sensors.

Considerations and constraints

Vehicle image sensors fall into two broad categories, said Andy Hanvey, head of automotive marketing at digital imaging technology and product maker OmniVision Technologies Inc. of Santa Clara, Calif. Some of the sensors, such as those in a backup camera, gather images to present to a driver. The other group provides data for lane departure warning systems, driver attentiveness monitoring, and the like, with captured images intended only for machine systems.

Vehicle applications are driving sensor innovations, such as this CMOS image sensor capable of running at 60 fps while simultaneously delivering ultrahigh dynamic range and LED flicker mitigation. Courtesy of ON Semiconductor Corp.


There are common requirements for both sets of sensors, with some significant differences compared to other consumer applications. One is the dynamic range, or the ability of the sensor to capture information in scenes with a large contrast. A car may, for instance, come out of a dark tunnel or parking garage into bright sunshine. When that happens, the image sensor needs to provide a clear picture to the driver, an onboard system, or both during this transition.

“You want to cover a scene between bright and dark,” Hanvey said. “Typically, in automotive, that level needs to be 120 dB. You need to use some imaging techniques to expand the dynamic range.”
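Hanvey doesn't name the techniques, but one widely used approach is merging several exposures of the same scene. Below is a minimal offline sketch with OpenCV, as an illustration only; automotive sensors do this on-chip in real time, and the exposure times and Debevec merge here are arbitrary choices, not a description of any vendor's pipeline:

```python
# Multi-exposure HDR merge sketch (OpenCV): one common way to extend
# dynamic range beyond what a single exposure can capture.
import cv2
import numpy as np

def merge_exposures(images, exposure_times_s):
    """Merge differently exposed 8-bit frames into one HDR radiance map."""
    times = np.array(exposure_times_s, dtype=np.float32)
    hdr = cv2.createMergeDebevec().process(images, times=times)
    ldr = cv2.createTonemap(gamma=2.2).process(hdr)  # back to displayable range
    return np.clip(ldr * 255, 0, 255).astype(np.uint8)

# Usage: three frames of the same scene, short to long exposure.
# result = merge_exposures([dark, mid, bright], [1/1000, 1/60, 1/8])
```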

Example of a surround view system (SVS), which merges input from multiple sensors to present an image to the driver of a vehicle’s surroundings. Courtesy of OmniVision Technologies Inc.


In contrast, a mobile application might have a dynamic range of 70 dB, he added. Thus, the maximum to minimum ratio for light intensity in the automotive case is over 300× that of the mobile situation.
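Those dB figures convert directly to intensity ratios. A quick check of the 300× claim, using the standard 20·log10 convention for imaging dynamic range:

```python
# Imaging dynamic range in dB: DR = 20 * log10(max_signal / min_signal).
def db_to_ratio(db):
    return 10 ** (db / 20)

automotive = db_to_ratio(120)     # ~1,000,000:1 max-to-min intensity
mobile = db_to_ratio(70)          # ~3,162:1
print(automotive / mobile)        # ~316 -- "over 300x"
```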

What’s more, the operating temperature range for a consumer application is typically 0 to 85 °C. Automotive sensors are expected to work from -40 to 125 °C. That interacts with the dynamic range requirement because as the operating temperature rises, so too does the dark current in the sensor. Vendors such as OmniVision must take special care within the manufacturing process to drive that dark current down, thereby expanding the operating temperature and preserving the high dynamic range.
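The interaction is easy to quantify with the common rule of thumb that dark current roughly doubles for every 6 to 8 °C rise; the doubling interval and reference value below are assumptions for illustration, as the real figures are process dependent:

```python
# Rule-of-thumb dark current model: doubles every ~7 degC (process dependent).
def dark_current(i_ref_e_per_s, t_ref_c, t_c, doubling_c=7.0):
    """Scale a reference per-pixel dark current (e-/s) to temperature t_c."""
    return i_ref_e_per_s * 2 ** ((t_c - t_ref_c) / doubling_c)

# A pixel with 10 e-/s of dark current at 25 degC, taken to 125 degC:
print(f"{dark_current(10, 25, 125):,.0f} e-/s")   # ~200,000 e-/s
```

At that level, dark current fills a pixel's limited charge capacity and eats into the low end of the dynamic range, which is why driving it down in manufacturing matters.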


Care must also be taken in the design, manufacturing, and testing of the sensors for another reason: achieving the required safety level. An active braking system must not spring to life at the wrong time, yet it must engage when needed. Opportunities for mistakes must be minimized to ensure the safest possible operation.

A final challenge for sensors is cost. The automotive market may be expanding rapidly, but there is a premium on producing quality parts as efficiently as possible. A single sensor with many capabilities can help achieve that, even if that sensor is somewhat more expensive.

Besides automotive, another area pushing imaging capability is the IoT. Refrigerators, washing machines, and home security systems are adding image sensors for cataloging food, recognizing people, and other tasks. But the IoT brings its own requirements, and they affect sensors, according to Nick Nam, head of emerging markets at OmniVision Technologies.

For instance, power consumption often must be minimized, particularly for IoT applications running on batteries. One solution is for the sensor to go into a power-down mode for those long stretches of time when nothing is happening. When movement or another trigger is detected, the sensor can return to full video mode, capturing images at 30 fps, and it must make that transition quickly.
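A minimal sketch of that duty-cycling behavior; the sensor class, trigger source, and timings are hypothetical stand-ins for real hardware:

```python
# Motion-triggered duty cycling for a battery-powered IoT camera (sketch).
import random
import time

class Sensor:                      # stand-in for a real sensor driver
    def power_down(self): print("sensor: standby, microwatt-level draw")
    def wake(self):       print("sensor: full video mode, 30 fps")
    def capture(self):    return "frame"

def motion_detected():             # stand-in for a PIR or on-chip wake trigger
    return random.random() < 0.1

sensor = Sensor()
for _ in range(3):                 # bounded loop, just for the sketch
    sensor.power_down()
    while not motion_detected():   # sleep cheaply until the trigger fires
        time.sleep(0.01)
    sensor.wake()                  # the wake transition must be fast
    frame = sensor.capture()
```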

Another constraint with image sensors may be the network. In an IoT application, any images gathered may be transmitted for analysis, storage, or further processing. The amount of data produced by a sensor is a function of frame rate, color, any depth information, and resolution. While other applications are upping resolution, that is not always possible for emerging IoT applications.

“The IoT is limited in increasing the resolution because of the bandwidth of one’s network,” Nam said, noting that the latest in high-resolution imaging at a high frame rate can generate gigabytes of data daily. “It’s a lot of data, and most don’t have the bandwidth for that.”
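A back-of-the-envelope calculation shows why, with illustrative numbers for a high-resolution stream:

```python
# Raw data rate of an uncompressed video stream.
width, height = 1920, 1080      # illustrative "high resolution"
fps = 30
bytes_per_pixel = 3             # 8-bit RGB

rate_mb_s = width * height * bytes_per_pixel * fps / 1e6
daily_gb = rate_mb_s * 86_400 / 1e3
print(f"{rate_mb_s:.0f} MB/s raw, ~{daily_gb:,.0f} GB/day")
# ~187 MB/s and ~16,000 GB/day before compression; even a heavily
# compressed stream easily reaches the gigabytes per day Nam describes.
```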

A hybrid image sensor, which uses a CMOS readout along with nonsilicon and possibly silicon photodetectors, can extend the spectral range beyond the visible. This hyperspectral imaging technique allows the extraction of additional information from objects, increasing the number of materials that can be detected and categorized. Courtesy of Teledyne DALSA.

Depth or 3D sensing is a capability being added to automotive, IoT, and other applications. There are competing 3D imaging methods, and the best choice varies with the situation. However, CMOS imaging technology is fundamental to many of them, according to Eric Fox, sensor products business line director for sensor chip and imaging system maker Teledyne DALSA of Waterloo, Ontario, Canada. Thus, there may be added performance pressure on CMOS imaging technology.

UV and IR performance

In many of these emerging applications, pixel count is not touted because resolution is not used as a selling point to consumers, Fox said. What matters is performance, and that leads to another important sensor trend: a push into spectral regions other than the visible. This includes both the shorter wavelengths of the UV and the longer ones of the IR.

The latter presents a problem for traditional CMOS imager technology because silicon can only function as a photodetector out to about 1.1 µm at best, Fox said. With applications increasingly wanting image information well beyond that, vendors have developed and deployed hybrid CMOS image sensors. In this approach, manufacturers build everything but the photodetector using standard CMOS processing and materials. This is roughly 90 percent of the finished sensor. Then they marry this to a simple photodetector built out of a material that is responsive in the IR, such as indium gallium arsenide (InGaAs).

This approach works very well out to 1.7 µm. Trickery in the material growth can push that out to 2.5 µm, but at a quality and cost hit.

“Barring the InGaAs material itself, it really is just a CMOS image sensor,” Fox said of the hybrid. “What differs is that you’ve replaced the silicon photodetector with something that is absorbing in the IR.”
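The wavelength limits Fox cites follow directly from the detector material's bandgap via λ_cutoff = hc/E_g. A quick check with nominal room-temperature bandgap values:

```python
# Detector cutoff wavelength from the bandgap: lambda_cutoff = h*c / E_g.
HC_EV_UM = 1.2398                 # Planck constant x speed of light, in eV*um

def cutoff_um(bandgap_ev):
    return HC_EV_UM / bandgap_ev

print(f"Si:     {cutoff_um(1.12):.2f} um")   # ~1.1 um, silicon's limit
print(f"InGaAs: {cutoff_um(0.73):.2f} um")   # ~1.7 um, standard InGaAs
```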

Imaging in the shortwave IR region out to about 2 µm offers improved performance in poor visibility or at night. When combined with capabilities in the visible and UV, the resulting multispectral or hyperspectral imaging can provide important information not obtainable by visible imaging alone. While not new, the hybrid approach offers the advantage that as CMOS technology improves, so can the performance of the sensors. What’s more, the hybrid technique can be extended to other materials, allowing sensors to capture information in the mid- and thermal-IR at 5 or 10 µm, or more.

A final trend that will affect sensors cuts across all applications. Improved hardware allows more processing in a small package near the camera, so AI and other image processing advances will demand more and more data in the future. In turn, that will force sensors and their makers to respond.

According to Fox, “That will continue to push on frame rate and number of pixels.”

Published: August 2018
