
Leave the Driving to Us

Advances in optical sensors are bringing the idea of a driverless car closer to reality.

After more than 130 miles across dry lake beds, along winding mountain paths and through the rocky wilderness, Stanley emerged from the Mojave Desert to cross the finish line. Beyond the sweetness of a first-place finish with its $2 million prize, there was the satisfaction in learning that only five competitors had completed the grueling race. Eighteen others had collapsed along the way or found themselves otherwise unable to stay on the course from start to finish.


Racing robots herald the advent of driverless vehicles. Fusing photonic and other sensors with image-processing algorithms, five vehicles successfully navigated the DARPA Grand Challenge 2005 course, with the fastest completing the run at an average speed of almost 20 mph. Not every robot finished, indicating that the technology still needs work before hitting the road. Courtesy of Virginia Polytechnic Institute and State University.

The scene was DARPA’s Grand Challenge 2005 in October, an annual contest designed to spur the automation of one-third of US ground combat vehicles by 2015. Stanley is a brainy diesel Volkswagen Touareg R5, one of the robotic cars that may presage an evolutionary leap in transportation.

“We believe that it is a no-brainer that in 50 years all cars will be autonomous,” said Hendrik Dahlkamp, a computer science graduate student at Stanford University in California. Dahlkamp was responsible for Stanley’s vision system as part of the Stanford Racing Team, which was headed by Sebastian Thrun, an associate professor in the computer science department and director of the Stanford Artificial Intelligence Lab.

The robot car called Stanley won DARPA’s Grand Challenge 2005, a grueling race through the Mojave Desert involving driverless vehicles. The annual contest is part of an effort to automate one-third of US ground combat vehicles by 2015. Courtesy of Stanford University.

Computer vision and laser ranging likely will play a significant role in bringing this futuristic scenario to reality. Stanley, for example, used these technologies to find its way along a course for which it had only start, finish and intermediate checkpoints as global positioning system coordinates.

To navigate through its environment, Stanley employed a color camera and a bank of five ladar rangefinders. In this image of Beer Bottle Pass in the race, the red signifies the area identified as “road” by a computer vision algorithm from the camera data, and the region within the blue lines is the area identified as “drivable” by the ladar scanners. Courtesy of Stanford University.

The Stanford team equipped its entry with two photonic sensing systems. One was a color camera that supplied information about the terrain to distances of approximately 80 m. The other was a bank of five ladar scanners from SICK Vertriebs GmbH of Dusseldorf, Germany, that probed some 25 m ahead of the vehicle. These rangefinders provided measurements of distances to objects that were accurate to the inch and therefore served as the means of gauging road conditions at short ranges. Stanley also had a radar sensor, but it was not used in the race, Dahlkamp said.

From the camera and ladar information, Stanley generated maps of its surroundings. Here, red areas are regions with obstacles, white areas are regions that are drivable, and gray areas are regions of as-yet undetermined drivability. Using such a map, Stanley produced potential routes through its environment (indicated in green) and selected the one that would encounter no obstacles and that would maximize velocity. Courtesy of Stanford University.

The ladars had data rates of about 13 kB/s, far less than the 500 kB/s provided by the camera. Because of their speed and relatively short sensing distance, the ladars alone could not provide the information Stanley needed to win the race.

The researchers therefore combined the data from the systems. They developed algorithms and techniques so that the more accurate information from the ladar could be applied to the scene captured by the camera, improving the interpretation of the two-dimensional data. This sensor fusion helped Stanley safely speed through the desert at 25 to 35 mph.
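
One way to realize the kind of fusion described above is to let the ladar's short-range drivability labels teach the camera what the road looks like farther out. Below is a minimal sketch, not the Stanford team's code; the single-Gaussian color model, array shapes and distance threshold are illustrative assumptions.

```python
import numpy as np

def fit_road_color_model(image, ladar_drivable_mask):
    """Fit a single Gaussian in RGB space to pixels the ladar labeled drivable."""
    samples = image[ladar_drivable_mask].astype(float)      # (N, 3) road-colored pixels
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(3)  # regularize the covariance
    return mean, np.linalg.inv(cov)

def classify_road(image, mean, inv_cov, threshold=9.0):
    """Mark pixels whose color lies close to the learned road model."""
    diff = image.reshape(-1, 3).astype(float) - mean
    mahal_sq = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)  # squared Mahalanobis distance
    return (mahal_sq < threshold).reshape(image.shape[:2])

# Usage: `image` is an HxWx3 camera frame; `ladar_mask` is an HxW boolean map of
# pixels whose corresponding terrain the ladar measured as flat and drivable.
#   mean, inv_cov = fit_road_color_model(image, ladar_mask)
#   road_mask = classify_road(image, mean, inv_cov)
```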

Dahlkamp noted that this one-size-doesn’t-fit-all outcome was not unexpected. “The sensor modalities available in robotics have such a variety of strengths and weaknesses that using a single one is never enough,” he said.

Cliff, built by students at Virginia Polytechnic Institute and State University, drove itself for approximately 40 miles along the Grand Challenge 2005 course using both cameras and ladars. Ultimately, the robot’s drivetrain failed. Courtesy of Virginia Tech.

Cliff and Rocky, two entries from Virginia Polytechnic Institute and State University in Blacksburg, employed a similar optical sensor setup but were equipped with only three ladars. Two looked immediately in front of the vehicle and provided an accurate, short-range three-dimensional terrain image. A third scanned horizontally out to 40 m and simply marked anything it detected as an obstacle, information that was supplemented by a stereovision camera system from Point Grey Research Inc. of Vancouver, British Columbia, Canada. From these multiple data streams, the robots built a 3-D map of the area that they used to distinguish the road from a ditch, boulder or other hazard to be avoided.
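
As a rough illustration of how such a map might be assembled, range returns from the ladars and the stereo system can be dropped into a vehicle-centered grid and each cell marked drivable or obstacle. This is not Virginia Tech's software; the grid resolution, extent and height threshold are assumed values.

```python
import numpy as np

CELL_SIZE = 0.25    # meters per grid cell (assumed)
GRID_DIM = 400      # 400 x 400 cells -> a 100 m x 100 m map (assumed)
UNKNOWN, DRIVABLE, OBSTACLE = 0, 1, 2

grid = np.full((GRID_DIM, GRID_DIM), UNKNOWN, dtype=np.uint8)

def update_grid(points_xyz, height_threshold=0.3):
    """Fold a batch of 3-D returns (from ladar or stereo) into the drivability grid."""
    for x, y, z in points_xyz:                # vehicle-centered coordinates, in meters
        col = int(x / CELL_SIZE) + GRID_DIM // 2
        row = int(y / CELL_SIZE) + GRID_DIM // 2
        if not (0 <= row < GRID_DIM and 0 <= col < GRID_DIM):
            continue
        if abs(z) > height_threshold:         # deviates from the ground plane: boulder, berm, drop-off
            grid[row, col] = OBSTACLE
        elif grid[row, col] != OBSTACLE:      # never downgrade a known obstacle
            grid[row, col] = DRIVABLE
```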

Virginia Tech’s other entry, Rocky (seen here without a cover), used stereovision cameras and three scanning laser rangefinders to map the road ahead. Rocky’s generator failed about 40 miles into the race, and the robot rolled to a stop in the middle of the course. Courtesy of Virginia Tech.

Brett Leedy, a graduate research assistant at Virginia Tech and leader of its Grand Challenge team, said that the group had considered using radar but rejected it because of its limited resolution. The output from a radar system such as those used today in commercial trucks is not a precise shape with sharply delineated features. Instead, he said, it’s more of a blob.

That is not the case with laser rangefinders. “The data from them is very precise,” he said. “It’s really straightforward. That’s the type of information we like to see.”

Leedy pointed out that Cliff and Rocky — a 20-hp Club Car and a four-wheel-drive off-road utility vehicle, respectively — failed to finish the race, but not because of their sensors. Cliff suffered a drivetrain failure, and Rocky was done in by a generator, each about 40 miles into the race. He also said that they would have finished in the allotted 10 hours but for these mechanical problems.

In poor visibility, radars and penetrating laser rangefinders probe the road ahead for potential hazards, bringing the vehicle to a stop if needed. Courtesy of Delphi Corp.


Another nonfinisher was Desert Tortoise, an entry from Intelligent Vehicle Safety Technologies. The group was sponsored by Ford Motor Co., Honeywell International Inc., Delphi Corp. and PercepTek Inc.

Team leader William Klarquist, vice president of engineering at Littleton, Colo.-based robotics company PercepTek, said that the vehicle left the designated course 12 miles into the race and traveled on and off the road for a while longer. Desert Tortoise, a Ford F-250 Super Duty pickup, eventually was stopped when it became clear that it could not stay on course.

According to Klarquist, the problems did not involve the sensors. As with the other entrants, it used cameras and ladars to probe the environment and select which path to take. A radar system supplied by Troy, Mich.-based Delphi also was part of the sensor suite.

He noted that any camera can suffer when used outdoors in ambient lighting because the scene may shift from shadow to full sunlight, or vice versa, within a single video frame, such as when entering or leaving a tunnel. To handle this effect, designers must perform a juggling act.

“Often, the result is a trade between iris, exposure and gain in the camera to cover the expected dynamic range as rapidly as possible,” Klarquist said.

There currently are no good solutions to this problem, but autonomous vehicles must continue to function safely during those times when the camera effectively goes blind. He added that efforts are under way to solve this problem using high-dynamic-range sensors.
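
A toy version of that juggling act is sketched below. It is not any vendor's control loop; the target brightness, limits and preference for exposure over gain are assumptions. The idea is simply to chase a mid-gray mean, adjusting exposure first because raising gain also raises noise.

```python
TARGET_BRIGHTNESS = 118            # mid-gray target for an 8-bit image (assumed)
EXPOSURE_LIMITS_MS = (0.05, 10.0)  # assumed camera limits
GAIN_LIMITS = (1.0, 16.0)

def adjust_exposure(mean_brightness, exposure_ms, gain):
    """One control step: scale exposure toward the target, spill the rest into gain."""
    ratio = TARGET_BRIGHTNESS / max(mean_brightness, 1.0)
    desired = exposure_ms * ratio
    new_exposure = min(max(desired, EXPOSURE_LIMITS_MS[0]), EXPOSURE_LIMITS_MS[1])
    residual = desired / new_exposure          # > 1 only when exposure alone cannot keep up
    new_gain = min(max(gain * residual, GAIN_LIMITS[0]), GAIN_LIMITS[1])
    return new_exposure, new_gain

# Each frame: measure the image's mean brightness, call adjust_exposure(), and
# write the new settings back to the camera before the next frame is captured.
```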

As for the laser rangefinders, the problem is speed. Because stopping distance increases with the square of velocity, the faster a vehicle travels, the farther ahead it must see. The approach adopted by many of the teams was to employ multiple units, with different ones assigned to scan at different resolutions and distances.

In a flash

To enable an autonomous vehicle to cruise at highway speeds, the distance perceived by its sensors must increase by four to nine times over what the participants in last year’s race achieved. If a single sensor is to be used, it not only must capture information rapidly, but also must do so at great distances.
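
That four-to-nine-times figure follows directly from the square law: doubling the speed quadruples the braking distance, and tripling it increases the distance ninefold. A quick back-of-the-envelope check, with a 0.7 friction coefficient assumed for dry pavement (this value is not from the article):

```python
G = 9.81     # gravitational acceleration, m/s^2
MU = 0.7     # assumed tire-road friction coefficient, dry pavement

def braking_distance_m(speed_mph):
    v = speed_mph * 0.44704              # mph -> m/s
    return v ** 2 / (2 * MU * G)         # d = v^2 / (2 * mu * g)

# Roughly 25 mph in the desert versus 50 or 75 mph on a highway:
#   braking_distance_m(25)  ->  ~9 m
#   braking_distance_m(50)  -> ~36 m   (4x)
#   braking_distance_m(75)  -> ~82 m   (9x)
```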

To accomplish this, new photonic technologies may have to be brought into play. At the National Institute of Standards and Technology in Gaithersburg, Md., supervisory engineer Maris Juberts has been investigating flash ladars. These typically employ brief laser pulses for illumination and use a focal plane array detector to measure time-of-flight information. Flash ladar systems thus determine the distance to objects on a pixel-by-pixel basis — in effect, producing a 3-D map of everything in their field of view. Scanning is not necessary, so the technique is fast.
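
The per-pixel arithmetic behind that 3-D snapshot is simple; building the detector is the hard part. A minimal illustration, with the array size chosen only to match the detector format mentioned later in the article:

```python
import numpy as np

C = 299_792_458.0      # speed of light, m/s

def range_image_from_tof(tof_seconds):
    """Convert per-pixel round-trip times from a flash ladar into ranges."""
    return C * np.asarray(tof_seconds) / 2.0    # divide by 2: out and back

# Example: a return arriving 667 ns after the flash corresponds to about 100 m.
#   tof = np.full((128, 128), 667e-9)
#   ranges = range_image_from_tof(tof)          # ~100 m at every pixel
```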

“The potential there is for much longer-range imaging and basically providing freeze-action snapshots of a scene in three dimensions at very fast frame rates,” Juberts said.

A flash ladar that images a flying helicopter can capture data so quickly that the spinning blades appear frozen and unblurred. However, Juberts said, flash ladars small enough and inexpensive enough for use on a standard car are not commercially available. If and when the technology is ready, it is likely to involve small angles and short ranges, because flooding a wide field of view out to a long distance requires high laser powers, and that would be expensive.

Advanced Scientific Concepts Inc. of Santa Barbara, Calif., produces flash ladar systems. Today, these are custom-made, mostly for aerospace applications, said Roger Stettner, its president. For illumination, the systems use a 1570-nm laser beam derived from an Nd:YAG and optical parametric oscillators. For a detector, they employ various optics and a unique processing pixel array that is hybridized to a 128 × 128-pixel InGaAs array. Using a single laser and proprietary algorithms, the systems can see through dust and smoke. The current maximum ranging distance for a handheld system is 1 km in clear air.

However, at $200,000 each, these systems are not cheap, and they are unlikely to ever be inexpensive enough for use on the family car without significant modifications to the design or manufacturing process.

“Without any changes in the technology, in large quantities, we could drop the price down to about $20,000 per system,” Stettner said.

That, he noted, assumes no decrease in the detection distance. Cutting back from 1 km to 100 m would yield significant savings on the laser. However, with most of the cost in the detector array, it is unclear whether that would bring the price down to the few hundred to a thousand dollars needed for a move into the mass consumer market.

Let me help you with that

Yet mass-market companies are looking into these technologies, mostly to assist the driver. For example, Audi Electronics Venture GmbH of Ingolstadt, Germany, a subsidiary of Audi AG, has investigated the use of a time-of-flight 3-D sensor from PMD Technologies GmbH of Siegen, Germany, for such things as stop-and-go assistance, precrash detection and spotting pedestrians.

Huan Yen, manager of Delphi’s advanced infotainment systems, said that the company plans to use radar for passenger cars, mostly because it is not affected by rain, fog or other inclement weather. Such conditions play havoc with laser-based systems. In addition, radar easily sees other vehicles.

Nonetheless, ladars are being considered for passenger automotive applications by other companies, and Delphi predicts that optical systems will be deployed in this sector. “We believe cameras will play a big part in future vehicles, primarily for safety applications,” Yen said.

One example might be for lane-departure warnings, in which a camera will look for the white stripes that mark a lane and the system will alert the driver if the car strays too close to or veers across the stripes. Likewise, the output from a camera can be used to improve radar results. The radar might indicate an obstacle ahead, but the camera would provide the information that the road curves around it. In that case, there would be no need to warn the driver of an impending collision.
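
A bare-bones sketch of the lane-marking idea might look for bright stripe edges and flag ones that drift toward the center of the frame. This is not Delphi's system; the OpenCV pipeline, thresholds and drift test below are illustrative assumptions.

```python
import cv2
import numpy as np

def lane_departure_warning(frame_bgr, center_band=0.15):
    """Return True if a detected lane stripe crosses the lower center of the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return False               # no markings found (faded paint, snow, glare)
    h, w = gray.shape
    center = w / 2.0
    for x1, y1, x2, y2 in lines[:, 0]:
        # A stripe low in the image and near the horizontal center suggests drift.
        if min(y1, y2) > 0.6 * h and abs((x1 + x2) / 2.0 - center) < center_band * w:
            return True
    return False
```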

The problem is that conditions are imperfect in the real world. For example, some lane markings are new and clear, while others are old and faded, and the system may encounter both in the span of a few seconds. In snow or heavy rain, it might be hard or impossible to spot the markings at all.

It is unlikely that advances in sensor technology alone will overcome this. The key, Yen said, is to build intelligence into the system through sensor fusion and data processing to enable it to deal with situations that might arise.

Using cameras, radar and other sensors, driver-assist systems provide 360° of protection around a vehicle, forewarning of collisions in front or at the sides and providing an added level of safety when the car is in reverse. Courtesy of Delphi.

Cameras also could play a role in handling perhaps the most complex variable in a driver-assist system: the human being behind the wheel. When a hazard emerges, the driver may be paying attention to the road, fiddling with the audio system, talking on the phone or trying to keep the toddler in the backseat happy. In the first case, it may not be necessary to issue an alert; in fact, doing so may make an accident more likely. But in the others, circumstances determine the appropriate sort of alert for the system to issue.

“If the situation warrants it, you do want to warn,” Yen said. “Sometimes you want to warn earlier even.”

In making that decision, the system might employ an infrared camera trained on the driver’s face to ascertain his or her degree of attention by day or night. This information would help the system decide when and whether assistance is needed.

Just relax

Stanford’s Dahlkamp has no problem with this picture of driver assistance in the near term. He predicts a future in which drivers increasingly will be assisted and relieved of boring tasks.

Stanley, the race-winning robot, can be thought of as the ultimate in assisted driving. “The driver has to worry about nothing and can relax,” Dahlkamp said.

Published: April 2006
