

Technology: Machine Vision

Hank Hogan, hank@hankhogan.com

A few years ago, microprocessors hit a speed limit. Cranking up the clock was no longer feasible, courtesy of some basic physics. However, customers expected increasing computing power.

The industry’s solution was to put two or more processing cores into a single chip. Today, four- and eight-core devices are available. Soon there will be chips with tens and hundreds of cores.

The result could be a significant boost to machine vision. Matt Slaughter, product marketing engineer for vision at Austin, Texas-based National Instruments, said a two-core chip has almost double the vision processing capability of one with a single core, while a four-core chip offers about 3.5 times the power.


One core good; many cores better. Intel’s research chip has 80 cores, making it the first to deliver more than a trillion calculations per second of performance. Courtesy of Intel Corp.


“The nice part is it’s almost a linear benefit,” said Slaughter of the increase in computing power versus the number of cores.
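
A rough back-of-the-envelope illustration of why the benefit is "almost" rather than exactly linear: if a fraction p of an algorithm can be spread across n cores while the remainder runs serially, Amdahl's law caps the speedup at 1 / ((1 - p) + p/n). Assuming roughly 95 percent of the work parallelizes, that works out to about 1.9x on two cores and 3.5x on four, in line with the figures Slaughter cites.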

That performance prediction comes with a caveat, though. The software must be properly designed. Successfully using a multicore approach requires that a task be divided or copied, with each segment worked on by a separate core.

It’s typically easier to do this with vision because often the same processing is done on each subsection of pixels. However, if two cores need the same resource or algorithm simultaneously, one core might have to sit around waiting for the other to finish.
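
As a concrete sketch of that division of labor (a generic C++ illustration, not code from National Instruments or any other vendor), the example below splits an image into bands of rows and hands each band to its own thread. Because every thread writes only its own rows, the cores never compete for the same pixels.

```cpp
// Minimal sketch, assuming an 8-bit grayscale image stored row-major in a
// std::vector. Each worker thread thresholds only its own band of rows, so
// no two cores ever touch the same pixels.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

void thresholdRows(std::vector<std::uint8_t>& pixels, int width,
                   int rowBegin, int rowEnd, std::uint8_t level) {
    for (int r = rowBegin; r < rowEnd; ++r)
        for (int c = 0; c < width; ++c) {
            std::uint8_t& p = pixels[static_cast<std::size_t>(r) * width + c];
            p = (p >= level) ? 255 : 0;
        }
}

int main() {
    const int width = 640, height = 480;
    std::vector<std::uint8_t> image(static_cast<std::size_t>(width) * height, 128);

    // One worker per reported core; hardware_concurrency() may return 0,
    // so fall back to a single thread in that case.
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;

    std::vector<std::thread> workers;
    const int rowsPerCore = (height + static_cast<int>(cores) - 1) / static_cast<int>(cores);
    for (unsigned i = 0; i < cores; ++i) {
        const int begin = static_cast<int>(i) * rowsPerCore;
        const int end = std::min(height, begin + rowsPerCore);
        if (begin >= end) break;
        workers.emplace_back(thresholdRows, std::ref(image), width, begin, end,
                             std::uint8_t{100});
    }
    for (auto& w : workers) w.join();  // wait for every band to finish
}
```

Because the bands are independent, no locking is needed; it is exactly when that independence breaks down, and two cores need the same resource, that one ends up waiting on the other.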

Vision software companies are aware of this constraint and have been changing their code to accommodate it. National Instruments, for example, has been making its algorithms re-entrant so that multiple cores can execute the same code simultaneously without interfering with one another.
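
The distinction is easiest to see in code. The sketch below uses hypothetical functions, not the NI library's API: the first version keeps its scratch buffer in a static variable, so two cores calling it at once would clobber each other's intermediate data; the re-entrant version keeps all working state local to the call.

```cpp
// Illustrative sketch of the re-entrancy idea; these function names are
// hypothetical, not part of any vendor's library.
#include <cstdint>
#include <vector>

// NOT re-entrant: the static scratch buffer is shared by every caller, so two
// cores running this at the same time would overwrite each other's data.
std::vector<float> smoothRowShared(const std::vector<std::uint8_t>& row) {
    static std::vector<float> scratch;              // one buffer for all threads
    scratch.assign(row.size(), 0.0f);
    for (std::size_t i = 1; i + 1 < row.size(); ++i)
        scratch[i] = (row[i - 1] + row[i] + row[i + 1]) / 3.0f;
    return scratch;
}

// Re-entrant: every piece of working state lives in locals, so any number of
// cores can call this simultaneously on different rows without interfering.
std::vector<float> smoothRow(const std::vector<std::uint8_t>& row) {
    std::vector<float> scratch(row.size(), 0.0f);   // private to this call
    for (std::size_t i = 1; i + 1 < row.size(); ++i)
        scratch[i] = (row[i - 1] + row[i] + row[i + 1]) / 3.0f;
    return scratch;
}
```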

Other constraints

Since 2000, MVTec Software of Munich, Germany, has offered what it calls automatic operator parallelization. Marketing and communications manager Lutz Kreutzer said it makes no difference to the company’s software how many cores there are: It automatically uses whatever is available, and a large number of cores pays off most when an operation demands a great deal of processing.
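
In spirit, the approach looks something like the sketch below, a generic illustration rather than MVTec's implementation: ask the machine how many cores it reports, split the rows across them, and fall back to a single core when the image is too small for the threading overhead to pay off.

```cpp
// Generic illustration of scaling to the available core count (not MVTec's
// code): parallelize only when the workload is large enough to be worth it.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

void invertRows(std::vector<std::uint8_t>& pixels, int width, int rowBegin, int rowEnd) {
    for (int r = rowBegin; r < rowEnd; ++r)
        for (int c = 0; c < width; ++c)
            pixels[static_cast<std::size_t>(r) * width + c] ^= 0xFF;
}

void invertImage(std::vector<std::uint8_t>& pixels, int width, int height) {
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

    // Small images: launching threads costs more than it saves,
    // so run the operation on a single core.
    if (static_cast<long long>(width) * height < 64 * 1024 || cores == 1) {
        invertRows(pixels, width, 0, height);
        return;
    }

    // Large images: one band of rows per available core.
    std::vector<std::thread> workers;
    const int rowsPerCore = (height + static_cast<int>(cores) - 1) / static_cast<int>(cores);
    for (unsigned i = 0; i < cores; ++i) {
        const int begin = static_cast<int>(i) * rowsPerCore;
        const int end = std::min(height, begin + rowsPerCore);
        if (begin >= end) break;
        workers.emplace_back(invertRows, std::ref(pixels), width, begin, end);
    }
    for (auto& w : workers) w.join();
}
```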

But, he noted, being able to do so depends on the computer’s memory and architecture, since each core has to have data to act on. “If not, the CPUs will be idle, and therefore we cannot gain the maximum speed up,” Kreutzer said.

Santa Clara, Calif.-based Intel estimates that some high-performance applications will require 2 to 4 GB of memory per core. Providing that much memory on certain board configurations could be difficult, particularly if a chip has tens or hundreds of cores; an 80-core device like Intel’s research chip, for example, would call for 160 to 320 GB by that measure.

Solving these and other problems before they become roadblocks is the subject of ongoing research.

For example, Intel and Microsoft are working with the University of Illinois at Urbana-Champaign and the University of California, Berkeley, on overcoming scalable software programming challenges. In November 2008, Georgia Institute of Technology in Atlanta established a Center for Manycore Computing to tackle multicore issues.


Taking full advantage of multicore processors requires software changes. Several image processing functions of the NI Vision Development Module, such as the image convolution function, have been redone to distribute processing automatically across cores. Courtesy of National Instruments.


Such investments won’t be wasted, noted Jim St. Leger, platform technology marketing manager for Intel’s embedded and communications group. Multicore is likely to be the standard from here on out, he said, despite continuing materials research. “I would struggle to see something be so fundamental that it would leapfrog where we’re at today with the multicore devices.”

Even if a breakthrough were to happen, he added, eventually physics would again intervene, and another limit would be reached. In that case, what would by that time be an old solution would again be employed. “At that point, then you take that device and do a multicore implementation,” St. Leger said.
