Imaging Trends: A Vision of the Future

Marie Freebody, Contributing Editor, [email protected]

Vision systems can be as important to manufacturing businesses as the manufactured products themselves. For products ranging from semiconductors to solar cells, and from pharmaceuticals to vehicles, vision systems help ensure quality and maximize production line efficiency, which means cost savings.

Sales revenue from machine vision systems in North America was valued at $889.3 million in 2009, according to the latest market study by the Automated Imaging Association (AIA).

The semiconductor industry tops the list of users both in terms of revenue and units sold. This is followed by the automotive and wood industries. While vision systems have enjoyed a strong presence in these manufacturing areas for many years, they are now starting to catch the attention of nonmanufacturing enterprises.

“Because of the attractiveness of the machine vision value proposition – increased efficiency, improved product quality and lower costs – machine vision technology will find success in a number of nontraditional and emerging markets,” said Paul Kellett, AIA director of market analysis. “As time goes on, I fully expect machine vision companies to increasingly target nonindustrial customers – that is, to focus beyond the factory in their sales campaigns.”

From economic lows to tech highs

The recent economic downturn hit the manufacturing industry hard and, as a result, the machine vision market suffered too. In fact, revenue declined by 29.4 percent from 2004 to 2009, according to AIA’s study.

Even though the greater economy may be experiencing a recovery, most economists view the comeback as tentative and weak. What’s more, many feel that a double-dip recession cannot be ruled out.

Despite this, Kellett is cautiously optimistic about machine vision. “While there is some evidence that the ‘Great Recession’ interrupted to some extent long-term trends (as customers bought down in response to shrunken capital budgets), I strongly suspect that the most recent sales data will show a return to the long-term trends,” he said. “Our new AIA sales tracking report shows that machine vision system sales in North America are up an average of 46 percent in the first half of 2010 over the first half of 2009.”

Economic pressure aside, the vision market has historically been driven by technological trends. One technology that evolved noticeably in 2010, and whose evolution is expected to continue into 2011, is the camera – the eyes of the vision system.

The trend here is toward cameras that are increasingly digital, faster and higher in resolution. As users call for better resolution and speed, demand for greater bandwidth intensifies. As a result, the migration from analog to digital, and between different digital interfaces, continues as users move up the “bandwidth ladder.”
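
As a rough illustration of that bandwidth ladder, the sketch below estimates the raw data rate of a hypothetical camera and compares it with the approximate usable throughput of GigE and 10 GigE links. The resolution, frame rate, bit depth and throughput figures are illustrative assumptions, not values from the article or the AIA study.

```python
# Rough, illustrative estimate of camera data rate vs. interface bandwidth.
# The camera parameters below are hypothetical examples, not article figures.

def camera_data_rate_gbps(width, height, frame_rate_hz, bits_per_pixel):
    """Raw (uncompressed) data rate in gigabits per second."""
    return width * height * frame_rate_hz * bits_per_pixel / 1e9

# Example: a 2048 x 1088 camera running at 100 frames/s with 8-bit pixels.
rate = camera_data_rate_gbps(2048, 1088, 100, 8)

# Approximate usable payload of the two Ethernet-based interfaces (nominal link
# rates minus protocol overhead; real throughput depends on packet size, etc.).
GIGE_USABLE_GBPS = 0.9
TEN_GIGE_USABLE_GBPS = 9.0

print(f"Camera data rate: {rate:.2f} Gb/s")
print(f"Fits on GigE:    {rate <= GIGE_USABLE_GBPS}")
print(f"Fits on 10 GigE: {rate <= TEN_GIGE_USABLE_GBPS}")
```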

The adoption of the GigE Vision standard in May 2007 represented a major market milestone. In 2009, only 14.2 percent of total machine vision camera units and 23.2 percent of revenue were due to prestandard GigE cameras – those that purported to have a GigE interface but did not officially conform to the standard, which had not yet been adopted when they were introduced.

Industry experts expect the GigE Vision camera and its extension, 10 GigE, to gain an increasing share of total machine vision camera sales over time but not to supplant all other types of cameras.

Another trend is the growing reliance on field-programmable gate arrays within digital cameras to perform image preprocessing tasks. Some say that this trend is eroding the distinction between the camera and a relatively new product: the smart camera.
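
The kind of pixel-level preprocessing typically moved into an in-camera FPGA, such as flat-field (gain and offset) correction, is often prototyped first as a software reference model. The sketch below is one such reference model under illustrative assumptions; it is not taken from any particular camera's pipeline, and the synthetic frames simply stand in for sensor data.

```python
# Minimal software reference model of a preprocessing step often offloaded to
# an in-camera FPGA: flat-field (gain/offset) correction. Illustrative only.
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Correct a raw frame using a dark frame and a flat-field frame."""
    corrected = (raw.astype(np.float32) - dark) / np.maximum(flat - dark, 1e-6)
    # Rescale back toward the original 8-bit range and clip.
    corrected *= np.mean(flat - dark)
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Synthetic example frames (in practice these come from the sensor).
rng = np.random.default_rng(0)
dark = rng.integers(0, 5, (480, 640)).astype(np.float32)
flat = 200 + rng.normal(0, 2, (480, 640)).astype(np.float32)
raw = rng.integers(0, 256, (480, 640)).astype(np.float32)

print(flat_field_correct(raw, dark, flat).shape)
```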

Smart cameras combine the functionality of a machine vision system with lower prices and have, not surprisingly, experienced explosive growth. Typically, a smart camera houses an image sensor, an image processing unit and machine vision software within the same casing.

When smart cameras were invented, their capabilities were somewhat limited. Their lack of tools and processing power meant that they were generally suitable only for a narrow range of simple tasks. Typically, these early smart cameras were used for detecting the presence or absence of items but were not particularly “smart” by today’s definition.
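
A presence/absence check of the kind those early smart cameras performed amounts to thresholding a region of interest and counting bright pixels. The minimal sketch below uses hypothetical threshold and region values purely for illustration.

```python
# Minimal presence/absence check of the kind early smart cameras performed:
# threshold a region of interest and count bright pixels. Values are hypothetical.
import numpy as np

def part_present(frame, roi, threshold=128, min_pixels=500):
    """Return True if enough bright pixels appear inside the region of interest."""
    x, y, w, h = roi
    patch = frame[y:y + h, x:x + w]
    return int(np.count_nonzero(patch > threshold)) >= min_pixels

# Synthetic grayscale frame with a bright "part" inside the inspection window.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[200:260, 300:380] = 220

print(part_present(frame, roi=(280, 180, 120, 100)))  # True: part detected
```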

Currently, smart cameras are the fastest growing sector of the machine vision market. And such products are opening the door to some of the emerging nonmanufacturing applications of vision systems, such as biometric recognition and intelligent surveillance.

Smarter surveillance

Video surveillance has moved on from the indiscriminate capture of images to more intelligent systems – traffic surveillance, for example. Smart surveillance systems can monitor the flow of traffic, alert authorities to possible accidents and identify number plates.

Perhaps the next step in intelligent surveillance is a system with a humanlike ability to learn from the environment it oversees. The SEARISE (Smart Eyes: Attending and Recognising Instances of Salient Events) project, which comes to fruition in February 2011, is developing just such a system: a trinocular active cognitive vision system.


The Smart Eyes system picks out two possible salient events during a soccer match. Waving flags (green box) are deemed not to be security relevant, whereas the person on the edge of the pitch (red box) is determined to warrant more attention. Courtesy of the SEARISE partners.


Unlike other approaches in video surveillance, the Smart Eyes system will be able to learn continuously from visual input. It will self-adjust to changes in the visual environment as well as concentrate its focus on salient events. Saliency is a measure of relative novelty, so a salient event is one that stands out from its surroundings.

Since not all salient events will be security relevant, the project members have developed software to help the system “learn” to focus only on security-relevant events. SEARISE software can learn from a few video samples of security-relevant events specified by experts.

The software then analyzes all salient events in the scene to detect the specified security events; it can update its knowledge about the learned events via automatic online learning. Additional interactive learning allows new security-relevant events to be incorporated into the previously learned model. These events will then be categorized according to context, and the smart camera will duly assign the majority of computational resources to the informative parts of the scene.
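
Saliency as relative novelty can be approximated with a running per-pixel background model: pixels that deviate strongly from what the camera has recently seen are flagged as candidate salient regions. The toy sketch below shows only that low-level idea; it is not the SEARISE software, which layers learned models of security-relevant events on top of such cues.

```python
# Toy novelty-based saliency: flag pixels that deviate strongly from a running
# background estimate. Illustrative only; not the SEARISE algorithm.
import numpy as np

class NoveltyDetector:
    def __init__(self, shape, learning_rate=0.05, k_sigma=3.0):
        self.mean = np.zeros(shape, dtype=np.float32)
        self.var = np.ones(shape, dtype=np.float32)
        self.lr = learning_rate
        self.k = k_sigma

    def update(self, frame):
        """Return a boolean saliency mask and update the background model."""
        frame = frame.astype(np.float32)
        diff = frame - self.mean
        salient = diff ** 2 > (self.k ** 2) * self.var
        # Online update of per-pixel mean and variance (exponential forgetting).
        self.mean += self.lr * diff
        self.var = (1 - self.lr) * self.var + self.lr * diff ** 2
        return salient

# Example: a static scene followed by a frame in which a bright object appears.
det = NoveltyDetector((240, 320))
static = np.full((240, 320), 100, dtype=np.uint8)
for _ in range(50):
    det.update(static)

scene = static.copy()
scene[100:140, 150:200] = 250          # a novel object appears
mask = det.update(scene)
print("salient pixels:", int(mask.sum()))
```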

The aim of the project is to develop a prototype of Smart Eyes and put it to the test in real-life scenarios featuring the activity of people at varying distances from the camera. At long range, the system will monitor the crowd behavior of sports fans in a soccer arena. At short range, it will be tasked with monitoring the behavior of small groups of people and single individuals.

The technology behind Smart Eyes consists of three cameras acting in unison in a coordinated “recognition loop.” A wide-view global camera performs general monitoring and, once a particular event of interest is identified, active binocular stereo cameras zoom in for a closer look.
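
That recognition loop can be outlined as a simple control flow: the wide-view camera proposes regions, and confident detections are converted into pan/tilt commands that steer the binocular stereo pair. The outline below uses hypothetical stubbed interfaces (detect_salient_regions, point_stereo_pair) and made-up angle ranges; it is a schematic sketch, not the SEARISE implementation.

```python
# Schematic outline of a wide-view camera cueing a zoomed stereo camera pair.
# All hardware interfaces here are hypothetical stubs, not the SEARISE code.
from dataclasses import dataclass

@dataclass
class Region:
    x: float   # normalized image coordinates, 0..1
    y: float
    score: float

def detect_salient_regions(wide_frame):
    """Stub: return candidate regions from the wide-view (global) camera."""
    return [Region(x=0.72, y=0.40, score=0.91)]  # placeholder detection

def point_stereo_pair(region):
    """Stub: convert image coordinates into a pan/tilt command for the
    binocular stereo cameras and zoom in on the region."""
    pan = (region.x - 0.5) * 60.0   # assume +/-30 degrees of pan
    tilt = (region.y - 0.5) * 40.0  # assume +/-20 degrees of tilt
    print(f"pan={pan:.1f} deg, tilt={tilt:.1f} deg, zoom in")

def recognition_loop(wide_frame):
    for region in detect_salient_regions(wide_frame):
        if region.score > 0.8:           # only follow confident detections
            point_stereo_pair(region)

recognition_loop(wide_frame=None)
```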

The SEARISE Project has been supported by the European Union’s FP7 Programme and is made up of a team of academic and industrial partners across Europe. The smart vision cameras can be used for security applications in public places such as sports stadiums, city streets, metro systems and airports.

Driving vision into autos

Still another trend is lower cost, as subcomponents become less expensive. As a consequence, vision systems are offering increasing value and appeal in commercial markets. For example, vehicle manufacturers have discovered the benefits of thermal imaging to enhance driver vision.

Of course, the military and police have used thermal imaging for night vision in security and surveillance applications for many years. In places where constant guarding is necessary, such as airports, ports and nuclear facilities, thermal imaging is often employed.

Thermal imaging specialist Flir Systems recognizes that, thanks to trends toward more compact systems, better image quality, more powerful software and falling costs, thermal imaging cameras are making their way into the mainstream.


Automotive night-vision systems enable drivers to detect and monitor potential hazards on or near the road, allowing more time to react to danger. The PathFindIR thermal imager helps the user recognize pedestrians, animals or objects in total darkness, smoke, rain and snow. Images courtesy of Flir Systems.


“Thermal imaging cameras are becoming more and more popular for a wide variety of applications,” said Christiaan Maras, Flir Systems’ marketing director for Europe, the Middle East and Africa. “Volumes are going up, and prices are coming down. This will help thermal imaging make its way to numerous users and applications for seeing in the dark, measuring temperatures and so on.”

Flir Systems’ PathFindIR thermal cameras are already sold as an option for enhanced driver vision on selected BMW 5, 6 and 7 Series models. The PathFindIR incorporates an uncooled 320 x 240-pixel microbolometer and a 19-mm wide-angle lens to provide a 36° field of view. Measuring just 5.8 x 5.7 x 7.2 cm and weighing 360 g, the compact camera is installed behind the vehicle’s grille.
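
The quoted field of view follows from the focal length and the physical width of the detector. The article does not state the pixel pitch, so the calculation below assumes a 38-µm pitch, a common value for uncooled microbolometers of that generation, purely to show the arithmetic.

```python
# Horizontal field of view from focal length and sensor width.
# The 38-micron pixel pitch is an assumption for illustration; the article
# states only the 320 x 240 resolution, 19-mm lens and ~36-degree FOV.
import math

pixels_h = 320
pixel_pitch_m = 38e-6          # assumed pitch (not stated in the article)
focal_length_m = 19e-3

sensor_width_m = pixels_h * pixel_pitch_m
hfov_deg = 2 * math.degrees(math.atan(sensor_width_m / (2 * focal_length_m)))
print(f"Horizontal FOV ~ {hfov_deg:.1f} degrees")   # ~35.5, consistent with 36
```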


Arming vehicles with vision technology is becoming big business. As featured in the October 2010 issue of Photonics Spectra (“On the road with Junior: A tale of optics and driverless cars,” pp. 34-37), Volkswagen and the Stanford University School of Engineering are working together as part of the Volkswagen Automotive Innovation Laboratory (VAIL). The group is looking into ways of transforming the information from cameras to provide details of a vehicle’s surroundings in 3-D.

To do this, VAIL is exploiting lidar data to better interpret the images provided by cameras. Being able to use cameras alone for accurate information about distance and shape will make it easier to integrate vision technology into commercial vehicles. Furthermore, vehicle manufacturers can take advantage of the fast frame rates, higher resolution and lower cost offered by today’s cameras.
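
Fusing lidar with camera imagery generally means projecting each lidar return into the image plane using the camera's calibration, so that pixels acquire depth. The snippet below is a generic pinhole-projection sketch with made-up intrinsics and example points; it does not represent VAIL's software.

```python
# Generic sketch: project lidar points (already in the camera frame) onto the
# image plane with a pinhole model. Calibration values are made up here.
import numpy as np

# Assumed intrinsics: focal lengths and principal point in pixels.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_points(points_cam):
    """points_cam: (N, 3) array of lidar points in the camera frame.
    Returns (N, 2) pixel coordinates and the per-point depth."""
    depths = points_cam[:, 2]
    uvw = (K @ points_cam.T).T           # homogeneous image coordinates
    pixels = uvw[:, :2] / depths[:, None]
    return pixels, depths

# Three example lidar returns 5-20 m in front of the camera.
pts = np.array([[0.5, -0.2, 5.0],
                [2.0,  0.1, 12.0],
                [-1.5, 0.0, 20.0]])
px, z = project_points(pts)
for (u, v), d in zip(px, z):
    print(f"pixel=({u:6.1f}, {v:6.1f})  depth={d:4.1f} m")
```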


The Flir Systems PathFindIR is a compact thermal imaging camera that reduces the hazards of nighttime driving. Courtesy of Flir Systems.


Lights, camera, action

With the trend toward nonindustrial applications, vision systems are being taken out of the factory setting, which means that new considerations come into play. Outdoor environments, for example, with their variable lighting and temperature changes, can pose problems for imaging and vision manufacturers and weigh more heavily on camera performance.

For John Merva of Advanced Illumination, based in Rochester, Vt., getting the lighting right is the single most important consideration.


Advanced Illumination’s new expandable spot/linear array features high-brightness LEDs and an extruded aluminum housing with built-in mounting features and an efficient heat transfer design. Images courtesy of Advanced Illumination.


“The image seen in the camera is light reflected from the object. Incorrect lighting equals a bad image,” Merva said. “Vision has moved from complex, high-cost products to all parts of the consumer product market and, as always, a proper lighting solution saves on the cost of a more expensive processing solution.”

Incorrect lighting can lead to serious imaging problems such as blooming, hot spots and shadowing, which can hide important image information. What’s more, nonuniform lighting can result in a low signal-to-noise ratio.
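
One way to quantify such lighting problems is to image a uniform target and compute simple uniformity and signal-to-noise figures; poor numbers point to hot spots or uneven illumination. The sketch below uses generic metrics and synthetic frames for illustration and is not any vendor's evaluation procedure.

```python
# Simple lighting-quality metrics from an image of a uniform (flat) target.
# Generic illustration only, not a vendor-specific evaluation procedure.
import numpy as np

def lighting_metrics(image):
    """Return (uniformity, snr) for a grayscale image of a flat target."""
    img = image.astype(np.float32)
    uniformity = img.min() / img.max()          # 1.0 means perfectly even
    snr = img.mean() / (img.std() + 1e-9)       # higher is better
    return uniformity, snr

# Synthetic example: an evenly lit target vs. one with a hot spot.
rng = np.random.default_rng(1)
even = 180 + rng.normal(0, 2, (480, 640))
hot = even.copy()
hot[:240, :320] += 60                           # simulated hot spot

print("even lighting:", lighting_metrics(even))
print("with hot spot:", lighting_metrics(hot))
```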

Selecting the proper illumination is no easy task. The solution often appears to be a mixture of science and art. However, Merva points out that light is governed by the laws of physics, and solutions are based on scientific principles. Leading lighting suppliers provide an application evaluation and recommendation service free of charge. To help customers make better decisions, Advanced Illumination even offers free education seminars related to the application of lighting.


Advanced Illumination’s new 6-in.-diameter low-angle dark-field light. Courtesy of Advanced Illumination.


Lighting vendors also face a number of customer-related challenges, since one lighting solution does not fit all applications and customers often require a solution in just one to two weeks. Merva advises that the best way to avoid a last-minute crisis is to seek the proper lighting solution at the inception of a project.

“As I review sales of our various products, I see a clear trend toward lights that have higher optical output and more complex, purpose-driven designs,” Merva said. “These lights allow delivery of very unique geometries of intense light.”

Perhaps the most obvious trend that is taking imaging and vision applications by storm is the tenacious rise of LEDs. LEDs have almost doubled their share of machine vision lighting units sold from 52.4 percent in 2005 to 99.8 percent in 2009, according to AIA’s market study. The battle of technologies has been clearly won by LEDs, which have supplanted all other lighting options, except in special cases, the report details.

AIA’s latest annual study also warns that, with the LED now the undisputed king, the overall strength of the machine vision lighting market will rest primarily on the average price and number of LEDs sold. If the average price of LEDs continues to drop, suppliers must sell an increasing number of units to offset the impact on total market revenue.
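
The arithmetic behind that warning is simply that revenue equals average price times units sold, so a price decline must be offset by a proportional rise in unit volume just to hold revenue flat. The figures below are hypothetical, not AIA data.

```python
# Revenue = average price x units sold. Hypothetical figures, not AIA data.
old_price, old_units = 100.0, 10_000          # baseline year
new_price = 80.0                              # 20 percent price decline

units_needed = old_price * old_units / new_price
print(f"Units needed to hold revenue flat: {units_needed:,.0f}")    # 12,500
print(f"Required unit growth: {units_needed / old_units - 1:.0%}")  # 25%
```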

But it’s not just the price of LEDs that lighting vendors must adapt to; LED sources have evolved to allow high-power output across a variety of wavelengths. This increase in optical output raises LED die temperatures, one of the more significant factors leading to early failure of the LED.

“Cold environs are not a problem for LED light sources, as they only assist in heat removal. However, hotter temperatures do provide a need for additional heat transfer measures,” Merva said. “Advanced Illumination has paid considerable attention to LED die temperature and effective heat transfer. The results of this can be observed in our product designs which incorporate extensive heat transfer features. This provides customers with LED illumination products that can operate for tens of thousands of hours in hotter environments.”
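
The die-temperature concern can be made concrete with the standard thermal-resistance relation: junction temperature equals ambient temperature plus dissipated power times junction-to-ambient thermal resistance. The power and thermal-resistance values below are illustrative assumptions, not Advanced Illumination specifications.

```python
# Junction temperature from the standard thermal-resistance relation:
# Tj = Ta + P * theta_ja. Values are illustrative, not vendor specifications.
def junction_temp_c(ambient_c, power_w, theta_ja_c_per_w):
    return ambient_c + power_w * theta_ja_c_per_w

# A 3 W LED on a poor heat sink vs. a well-designed extruded-aluminum housing.
print(junction_temp_c(ambient_c=45, power_w=3.0, theta_ja_c_per_w=35))  # 150 C
print(junction_temp_c(ambient_c=45, power_w=3.0, theta_ja_c_per_w=12))  # 81 C
```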

Merva also notices a trend toward applications that employ line-scan cameras for inspection of textiles, films, semiconductors, lumber, roads and rails. Line configurations represent the most popular lighting geometry at 29 percent of the units sold and 29.3 percent of the total sales revenue, AIA reports.


Advanced Illumination’s newest high-intensity line light for line-scan applications. Also available as a backlight, it features remotely adjustable intensity with onboard current drive and passive cooling. Available in expandable sizes up to 96 in. Courtesy of Advanced Illumination.


In comparison, area lighting represents 15.1 percent of the units sold and 15.3 percent of the revenue. This is followed by backlighting, which comprises 10.2 percent of units sold and 13.7 percent of total revenue.

Computer vision: There’s an app for that


Vision technology is not just about improving image capture; there is also a lot happening behind the camera in terms of image processing. Computer vision may be in its infancy, but smart phone users are already taking advantage of advances in this field, thanks to a new application from Google.


Google Goggles at work identifying Jackson Pollock’s Convergence as well as the cover of Jack Kerouac’s On the Road. Images courtesy of Google.

We are all familiar with searching the Internet for information by typing words into a browser or, more recently, speaking phrases into a smart phone. Google Goggles has provided Android users with a visual search tool since December 2009; the application was released to iPhone users in September 2010.

The user simply captures an image of an object of interest; Google analyzes the photo for a few seconds and then offers relevant search results.

The service is in its infancy, and it currently works best with common landmarks, businesses, bar codes and products such as books, movies and wine bottles. However, Google hopes to increase the number of objects that can be recognized and provide better and more relevant responses when there is a match.

“Goggles is a result of computer vision research work done by our team over many years,” said Shailesh Nalawadi, a Google Goggles product manager. “We felt it was only natural that users should also be able to search for information by pointing to the object of interest using their smart phones to visually capture information. Over time, we feel this can become a very natural way for users to find out about items in their physical surroundings.”

The Goggles team has created a database of recognized objects and associated metadata about the objects. Each recognized object is converted to its mathematical representation and stored in a server.

When a user selects Goggles to capture a query image, the image is compressed on the smart phone and transmitted to Google’s recognition servers in the cloud. The recognition servers then convert the query image to a mathematical representation, which is compared with all previously recognized objects in the database in a matter of seconds.

“We compile all the matched images and the associated keywords and stream it back to the user’s handset,” Nalawadi said. “The user can select the result that most closely matches the query, and we render search results for the match.”
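
Conceptually, the pipeline Nalawadi describes reduces to computing a compact descriptor for the query image and finding the nearest descriptors in a database of known objects. The sketch below illustrates only the matching step, with random vectors standing in for real image descriptors and with hypothetical database entries; it is not Google's algorithm.

```python
# Conceptual visual-search matching: compare a query descriptor against a
# database of object descriptors by cosine similarity. Not Google's algorithm;
# random vectors stand in for real image features.
import numpy as np

rng = np.random.default_rng(42)

# "Database": one descriptor per recognized object (hypothetical entries).
database = {
    "Convergence (Jackson Pollock)": rng.normal(size=128),
    "On the Road (Jack Kerouac)":    rng.normal(size=128),
    "Eiffel Tower":                  rng.normal(size=128),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_descriptor, top_k=2):
    """Return the top_k database entries ranked by similarity to the query."""
    scores = {name: cosine(query_descriptor, desc) for name, desc in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Simulate a query that is a noisy view of the Pollock painting.
query = database["Convergence (Jackson Pollock)"] + 0.3 * rng.normal(size=128)
for name, score in search(query):
    print(f"{score:.2f}  {name}")
```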

If computers can be taught to reliably “see,” Nalawadi predicts a revolution in the way humans interact with the world around them.

“Imagine walking through a forest while having access to information about all the plants and animals around you. Or getting your computer to suggest the next move in a chess game, or being able to pull up product information by pointing the phone at a cereal box in the grocery store aisle,” he said. “The possibilities are immense.”

Published: January 2011
Glossary
gige
GigE, short for gigabit Ethernet, refers to a standard for high-speed Ethernet communication, capable of transmitting data at rates of up to 1 gigabit per second (Gbps), or 1000 megabits per second (Mbps). GigE is an extension of the ethernet family of networking technologies, which is widely used for local area network (LAN) communication in homes, businesses, and data centers. Key features and characteristics of gigabit Ethernet include: Speed: GigE offers significantly higher data...
machine vision
Machine vision, also known as computer vision or computer sight, refers to the technology that enables machines, typically computers, to interpret and understand visual information from the world, much like the human visual system. It involves the development and application of algorithms and systems that allow machines to acquire, process, analyze, and make decisions based on visual data. Key aspects of machine vision include: Image acquisition: Machine vision systems use various...
thermal imaging
The process of producing a visible two-dimensional image of a scene that is dependent on differences in thermal or infrared radiation from the scene reaching the aperture of the imaging device.