
Technology: Machine Vision

Divide and Conquer

Hank Hogan

A few years ago, microprocessors hit a speed limit. Cranking up the clock was no longer feasible, courtesy of some basic physics: power consumption and waste heat climb steeply with clock frequency. Customers, however, still expected computing power to keep increasing.

The industry’s solution was to put two or more processing cores into a single chip. Today, four- and eight-core devices are available. Soon there will be chips with tens and hundreds of cores.

The result could be a significant boost to machine vision. Matt Slaughter, product marketing engineer for vision at Austin, Texas-based National Instruments, said a two-core chip has almost double the vision processing capability of one with a single core, while a four-core chip offers about 3.5 times the power.

One core good; many cores better. Intel’s research chip has 80 cores, making it the first to deliver more than a trillion calculations per second of performance. Courtesy of Intel Corp.


“The nice part is it’s almost a linear benefit,” said Slaughter of the increase in computing power versus the number of cores.
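
That scaling is consistent with a back-of-the-envelope model. Assuming, purely for illustration, that about 95 percent of a vision workload can run in parallel, Amdahl’s law reproduces figures very close to the ones Slaughter cites:

    # Back-of-the-envelope check with Amdahl's law; the 95 percent
    # parallel fraction is an assumption for illustration, not a
    # figure from National Instruments.
    def speedup(cores, parallel_fraction=0.95):
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / cores)

    for n in (2, 4, 8):
        print(f"{n} cores: {speedup(n):.2f}x")
    # 2 cores: 1.90x
    # 4 cores: 3.48x  -- close to the figures quoted above
    # 8 cores: 5.93x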

That performance prediction comes with a caveat, though. The software must be properly designed. Successfully using a multicore approach requires that a task be divided or copied, with each segment worked on by a separate core.

Vision lends itself to this division because the same processing is typically applied to each subsection of pixels. However, if two cores need the same resource or algorithm simultaneously, one may have to sit idle while the other finishes.
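
As a minimal sketch of that division of labor, assuming Python with NumPy rather than any particular vision package, the fragment below splits an image into horizontal strips and thresholds each strip in a separate worker process. The names and strip count are illustrative; real vision libraries handle scheduling and shared resources with far more care.

    # Minimal sketch of data-parallel image processing: split an
    # image into horizontal strips and hand one strip to each core.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def threshold_strip(strip):
        # Every worker applies the same operation to its own pixels,
        # so the cores never contend for a shared resource.
        return np.where(strip > 128, 255, 0).astype(np.uint8)

    def threshold_parallel(image, workers=4):
        strips = np.array_split(image, workers, axis=0)
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return np.vstack(list(pool.map(threshold_strip, strips)))

    if __name__ == "__main__":
        image = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)
        binary = threshold_parallel(image)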

Vision software companies are aware of this constraint and have been implementing changes to accommodate it. National Instruments, for example, has been making its algorithms re-entrant so that multiple cores can access the same code.
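
What re-entrancy buys can be shown with a toy histogram routine; this is a hypothetical illustration, not National Instruments’ code. The unsafe version mutates shared state, so two cores calling it at once corrupt each other’s results, while the re-entrant version keeps all state local to the call:

    # Hypothetical illustration of re-entrancy; not NI's actual code.
    _scratch = [0] * 256                 # shared, module-level state

    def histogram_unsafe(pixels):
        # Two cores running this at once clobber _scratch.
        for i in range(256):
            _scratch[i] = 0
        for p in pixels:
            _scratch[p] += 1
        return list(_scratch)

    def histogram_reentrant(pixels):
        counts = [0] * 256               # state is local to the call,
        for p in pixels:                 # so any number of cores can
            counts[p] += 1               # run this code simultaneously
        return counts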

Other constraints

Since 2000, MVTec Software of Munich, Germany, has offered what it calls automatic operator parallelization. Marketing and communications manager Lutz Kreutzer said it makes no difference to the company’s software how many cores there are: it automatically uses whatever is available, with a high core count paying off most when an operation demands a lot of processing power.
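
MVTec’s mechanism is internal to its library, but the underlying idea can be sketched: ask the operating system how many cores are present and size the work to match, so the same call speeds up on a two-core desktop or a many-core server without modification. The helper below reuses the strip-splitting approach from the earlier sketch and is illustrative only, not MVTec’s implementation.

    # Illustrative sketch: size the work to whatever core count the
    # machine reports, so the same call scales as cores are added.
    import os
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def run_on_all_cores(op, image):
        cores = os.cpu_count() or 1          # use whatever is available
        strips = np.array_split(image, cores, axis=0)
        with ProcessPoolExecutor(max_workers=cores) as pool:
            return np.vstack(list(pool.map(op, strips)))

    def invert(strip):
        return 255 - strip                   # same operator on every strip

    if __name__ == "__main__":
        img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
        out = run_on_all_cores(invert, img)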


But, he noted, being able to do so depends on the memory and the architecture of the computer, since the processor has to have data to act on. “If not, the CPUs will be idle, and therefore we cannot gain the maximum speed up,” Kreutzer said.

Santa Clara, Calif.-based Intel estimates that some high-performance applications will require 2 to 4 GB of memory per core. Acquiring that much memory on certain board configurations could be difficult, particularly if a chip has tens or hundreds of cores.
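
The arithmetic adds up quickly. Applying Intel’s 2 to 4 GB-per-core estimate to a few hypothetical core counts, including the 80 cores of the research chip mentioned above:

    # Illustrative arithmetic only: Intel's 2-4 GB-per-core estimate
    # applied to a few core counts.
    for cores in (4, 8, 80):
        print(f"{cores:3d} cores: {2 * cores}-{4 * cores} GB")
    #   4 cores: 8-16 GB
    #   8 cores: 16-32 GB
    #  80 cores: 160-320 GB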

Solving these and other problems before they become roadblocks is the subject of ongoing research.

For example, Intel and Microsoft are working with the University of Illinois at Urbana-Champaign and the University of California, Berkeley, on the challenge of writing software that scales across many cores. In November 2008, Georgia Institute of Technology in Atlanta established a Center for Manycore Computing to tackle multicore issues.

Taking full advantage of multicore processors requires software changes. Several image processing functions of the NI Vision Development Module, such as the image convolution function, have been redone to distribute processing automatically across cores. Courtesy of National Instruments.


Such investments won’t be wasted, noted Jim St. Leger, platform technology marketing manager for Intel’s embedded and communications group. Multicore is likely to be the standard from here on out, he said, despite continuing materials research. “I would struggle to see something be so fundamental that it would leapfrog where we’re at today with the multicore devices.”

Even if a breakthrough were to happen, he added, eventually physics would again intervene, and another limit would be reached. In that case, what would by that time be an old solution would again be employed. “At that point, then you take that device and do a multicore implementation,” St. Leger said.

Published: January 2009
