
On the Road with Junior: A tale of optics and driverless cars

Gary Boas, Contributing Editor, [email protected]

Once accessible only to the likes of David Hasselhoff and Batman, autonomous driving is now finding its way into a range of consumer vehicles. Here, in following a car named Junior and its various progeny, we trace the recent history of driverless vehicles and the advances in optics and sensor processing that have made these vehicles possible.
Sunrise, Nov. 3, 2007. Eleven robot cars line up in the starting chutes and, one by one, pull onto the course: a makeshift urban landscape built at the former George Air Force Base in Victorville, Calif. The autonomous vehicles – no drivers and no remote controls – are tasked with completing a variety of complex maneuvers, including merging, passing, parking and negotiating intersections, all while interacting with both manned and unmanned cars and even determining right of way.


The DARPA Urban Challenge pitted 11 driverless vehicle finalists against one another in a race to complete a range of maneuvers, including passing, parking and negotiating intersections. Shown are Stanford Racing Team’s and Team VictorTango’s (Virginia Tech’s) cars together at an intersection (top) and Stanford Racing Team’s car approaching a traffic vehicle (above). Photos courtesy of DARPA.


The DARPA Grand Challenge is a prize competition, the primary goal of which is to fund research to develop autonomous vehicle technology that will help to keep soldiers off the battlefield. The first two challenges were held in the Mojave Desert, where course obstacles were mostly rocks and bushes. The 2007 Urban Challenge, however, required much more of the cars.

“This has a component of prediction,” said Michael Montemerlo, a senior research engineer in the Stanford Artificial Intelligence Lab (SAIL) in California. The Stanford Racing Team won the 2005 Challenge and was one of the 11 finalists here. “There are other intelligent robot drivers out in the world. They are all making decisions. Predicting what they are going to do in the future is a hard problem that is important to driving. Is it my turn at the intersection? Do I have time to get across the intersection before somebody hits me?”
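
Montemerlo’s intersection questions reduce, at their simplest, to comparing predicted arrival times at a shared conflict point. The Python sketch below is a toy version of that reasoning under a constant-velocity assumption – far cruder than Junior’s actual prediction software, with all positions, velocities and the safety margin invented for illustration.

```python
import math

def time_to_reach(pos, vel, point):
    """Time (s) for an agent moving at constant velocity to reach a point,
    or None if it is not closing on it. Positions in m, velocities in m/s."""
    dx, dy = point[0] - pos[0], point[1] - pos[1]
    dist = math.hypot(dx, dy)
    speed = math.hypot(*vel)
    if speed < 1e-6:
        return None
    # Project velocity onto the direction of the point; require closing motion.
    closing = (vel[0] * dx + vel[1] * dy) / dist
    return dist / closing if closing > 0 else None

def safe_to_cross(ego_pos, ego_vel, others, conflict_point, margin=2.0):
    """Cross only if our arrival and every other car's arrival at the
    conflict point are separated by at least `margin` seconds."""
    t_ego = time_to_reach(ego_pos, ego_vel, conflict_point)
    if t_ego is None:
        return False
    for pos, vel in others:
        t = time_to_reach(pos, vel, conflict_point)
        if t is not None and abs(t - t_ego) < margin:
            return False
    return True

# Oncoming car 40 m away at 10 m/s; we are 8 m away at 4 m/s.
print(safe_to_cross((0, -8), (0, 4), [((40, 0), (-10, 0))], (0, 0)))  # True
```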


Using an array of sensors – including lidar, radar and cameras – Junior 2 can identify and track vehicles, bicycles and pedestrians, and otherwise find its way around town without the aid of a driver. Courtesy of Mike Sokolsky, Stanford Artificial Intelligence Lab.


By midmorning, almost half of the cars had been removed from the course, unable to complete the tasks for one reason or another. One, a crowd favorite from the previous challenge, had started driving erratically and nearly plowed into the one-time commissary building. Six cars crossed the finish line. In first place: Tartan Racing, from Carnegie Mellon University in Pittsburgh. In second: Stanford, with its modified 2006 Volkswagen Passat Wagon named Junior.

Junior achieved its second-place showing thanks to an array of sensors and positioning systems – not to mention the sophisticated software developed at SAIL for perception, mapping and planning, which gave the car the machine-learning ability to improve its driving and to maintain a cohesive understanding of everything going on around it.

The sensors included two side-facing light-detection and ranging (lidar) units made by Sick AG of Waldkirch, Germany, and a forward-facing Riegl LMS-Q120 lidar made by Riegl of Horn, Austria. These allowed the car to localize itself in real time – finding lane markings based on differences in brightness of the ground returns – and to estimate its position to within 5 cm. Additional lidar units mounted in the front and rear, and on top of the car, aided in object detection and tracking.
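
The lane-finding step can be pictured as an intensity-outlier test on the lidar’s ground returns: painted markings retroreflect far more strongly than bare asphalt. Below is a minimal sketch of that idea; the array layout, window size and threshold are illustrative assumptions, not the Stanford pipeline.

```python
import numpy as np

def find_lane_marking_points(returns, window=200, k=2.0):
    """Flag ground returns whose reflectance stands out from local pavement.

    `returns` is an (N, 3) array of (x, y, intensity) ground points,
    ordered along the scan. Paint retroreflects more strongly than
    asphalt, so markings show up as high-intensity outliers.
    """
    intensity = returns[:, 2]
    flags = np.zeros(len(returns), dtype=bool)
    for i in range(0, len(returns), window):
        chunk = intensity[i:i + window]
        mu, sigma = chunk.mean(), chunk.std() + 1e-9
        flags[i:i + window] = chunk > mu + k * sigma
    return returns[flags, :2]   # (x, y) of candidate lane-marking points

# Synthetic scan line: dim asphalt with one bright painted stripe.
rng = np.random.default_rng(0)
pts = np.column_stack([np.arange(400) * 0.05,
                       np.zeros(400),
                       rng.normal(20, 2, 400)])
pts[180:200, 2] += 40          # painted stripe reflects brightly
print(len(find_lane_marking_points(pts)))   # ~20: the stripe, plus a few noise hits
```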

Putting Junior in its place

The potential military applications of autonomous driving technology are important, but the developers of this remarkable little station wagon were simultaneously working toward another goal: applying the technology to improve the safety and overall driving experience in consumer vehicles. To this end, Volkswagen worked closely with Stanford in developing both Junior and Stanley, the modified 2004 Volkswagen Touareg that won the 2005 Grand Challenge (and that is on display in the Smithsonian Institution in Washington).

Late last year, Volkswagen and Stanford University’s School of Engineering announced the Volkswagen Automotive Innovation Laboratory (VAIL), the next step in the ongoing relationship between the two organizations. The Volkswagen Group has dedicated $5.75 million to the creation of the lab, including $2 million for building construction and an additional $750,000 per year for the next five years to fund research and teaching activities – encompassing, for example, far-ranging collaborations between Stanford researchers, international visiting scholars, automotive equipment manufacturers and Silicon Valley experts.

At the formal dedication and opening of VAIL this past April, presided over by German Chancellor Angela Merkel, Volkswagen and Stanford demonstrated their latest step forward in vision-assisted autonomous driving: Junior 3, a blue Passat running the prototype Autonomous Valet Parking system.



In April of this year, German Chancellor Angela Merkel presided over the formal dedication and opening of the Volkswagen Automotive Innovation Laboratory (VAIL), the next step in the ongoing relationship between the car company and Stanford University’s School of Engineering. Courtesy of Volkswagen Group of America.


Junior 3 can navigate a parking garage and park itself using mostly stock parts. Both the camera positioned in front of the rear-view mirror and the front radar system are available as package options from Volkswagen. The off-the-shelf lidar units attached to the sides of the car cannot be purchased as package options, but the company does offer other, similar side-looking sensors for lane-assist and blind-spot detection. Drivers can recall the car with the push of a button, using an iPhone or other smartphone app.

Bloggers and assorted other observers have pointed out the shortcomings of the system as demonstrated in April: It can park itself only if provided a map of the garage, and it cannot detect pedestrians who might happen to step in front of it. Volkswagen envisions garages dedicated to cars with the systems, thus overcoming these obstacles. Whether or not this comes to pass, Junior 3 demonstrates that, technologically, vision-assisted driverless parking is within reach.
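
With the garage map given in advance, the parking run itself reduces to searching for a path through known free space. Here is a toy grid-planning sketch – plain A* over an occupancy grid, standing in for whatever proprietary planner Volkswagen actually uses, with the map and coordinates invented for illustration.

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = blocked).
    Returns the path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, None)]
    came_from, cost = {}, {start: 0}
    while frontier:
        _, g, cell, parent = heapq.heappop(frontier)
        if cell in came_from:            # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:                 # walk parents back to the start
            path = []
            while cell:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < cost.get((nr, nc), float("inf")):
                    cost[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None

# Tiny "garage": 0 = drivable aisle, 1 = parked rows; plan entrance -> stall.
garage = [[0, 0, 0, 0],
          [1, 1, 0, 1],
          [0, 0, 0, 0]]
print(astar(garage, (0, 0), (2, 3)))
```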

Big Brother takes to the streets

Also in attendance at the formal dedication of VAIL: Junior 2, the most recent version of the car used in the DARPA Urban Challenge. Whereas Junior 3 demonstrates the Autonomous Valet Parking system, Junior 2 serves primarily as a test bed for artificial intelligence research in the driving domain, with the specific goal of developing a fully autonomous car capable of driving on urban streets in traffic.

“While we’re not going to see fully hands-off cars on the road anytime soon,” said Mike Sokolsky, a robotics research engineer at SAIL, “almost all of the research we do could be used to increase safety and efficiency in driving, even when a human is primarily in control.” Indeed, a number of options found in consumer cars today were made possible by autonomous driving research: adaptive cruise control and lane position monitoring, for instance.

Junior 2 identifies and tracks vehicles, bicycles and pedestrians all around it, taking advantage of the global view afforded by the many sensors mounted on the car; it never loses focus or forgets to check a blind spot before changing lanes. But the car still isn’t as adept as human drivers at interpreting all of the information it receives about its environment. “At this point, the sensors available to autonomous cars can provide very rich information,” Sokolsky said, “but making the most of it is definitely still a challenge.”
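
A bare-bones version of the bookkeeping behind such tracking is sketched below: greedy nearest-neighbor association with constant-velocity prediction. It is a deliberately simplified stand-in for Junior 2’s real tracker, with the frame interval and gating distance chosen arbitrarily.

```python
import math

def step_tracker(tracks, detections, dt=0.1, gate=2.0):
    """One update of a greedy nearest-neighbor, constant-velocity tracker.

    tracks     : dict id -> {'pos': (x, y), 'vel': (vx, vy)}
    detections : list of (x, y) measurements from one sensor frame
    Each track claims the closest unclaimed detection within `gate` meters
    of its predicted position; leftover detections become new tracks.
    """
    unclaimed = list(detections)
    for tr in tracks.values():
        # Predict the track forward one frame at constant velocity.
        px = tr['pos'][0] + tr['vel'][0] * dt
        py = tr['pos'][1] + tr['vel'][1] * dt
        best = min(unclaimed,
                   key=lambda d: math.hypot(d[0] - px, d[1] - py),
                   default=None)
        if best and math.hypot(best[0] - px, best[1] - py) < gate:
            unclaimed.remove(best)
            tr['vel'] = ((best[0] - tr['pos'][0]) / dt,
                         (best[1] - tr['pos'][1]) / dt)
            tr['pos'] = best
    for d in unclaimed:                  # spawn a track for anything new in view
        tracks[len(tracks)] = {'pos': d, 'vel': (0.0, 0.0)}
    return tracks

tracks = {}
step_tracker(tracks, [(10.0, 0.0)])      # new object appears
step_tracker(tracks, [(10.5, 0.0)])      # moves 0.5 m in one frame
print(tracks[0]['vel'])                  # (5.0, 0.0) m/s
```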

To date, the car has relied largely on lidar – using it to produce a detailed three-dimensional point cloud, its primary source of data. Cameras offer faster frame rates, higher resolution and much lower costs, albeit without any real depth information. Stereo vision shows some promise but has relatively limited sensitivity and resolution. And it will be some time before time-of-flight depth cameras are inexpensive and robust enough to use in an automotive environment.

The Stanford researchers are seeking to address this issue by using “familiar and easy-to-process” data from the lidar units to learn more about the images coming out of the cameras – establishing correlations between the two types of sensors. For example, by extracting 3-D models of moving objects from the point cloud and matching them to what the camera sees in the same location, they can obtain accurate information about distance and shape.
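
Geometrically, correlating the two sensors comes down to projecting lidar points through the camera model so that measured depths land on specific pixels. A minimal sketch follows, using a pinhole camera model and placeholder calibration values rather than Junior 2’s real extrinsics and intrinsics.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project 3-D lidar points into pixel coordinates.

    points_lidar : (N, 3) points in the lidar frame (meters)
    R, t         : rotation (3x3) and translation (3,), lidar -> camera
    K            : 3x3 pinhole intrinsic matrix
    Returns (M, 2) pixel coords and (M,) depths for points in front of the camera.
    """
    pts_cam = points_lidar @ R.T + t          # transform into the camera frame
    in_front = pts_cam[:, 2] > 0.1            # drop points behind the lens
    pts_cam = pts_cam[in_front]
    uv = pts_cam @ K.T                        # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]               # perspective divide
    return uv, pts_cam[:, 2]

# Placeholder calibration: camera looking straight down the lidar's +Z axis.
K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[2.0, 0.5, 10.0], [-1.0, 0.0, 5.0]])
uv, depth = project_lidar_to_image(pts, R, t, K)
print(uv)      # pixel locations of the two returns
print(depth)   # metric depth to attach to those pixels
```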

“The hope is to be able to get as much or more information directly out of a camera without the need for depth sensing at all,” Sokolsky said. “By relying primarily on a camera, data systems like this are much easier to integrate into commercial vehicles, as well as taking advantage of the benefits in frame rate and resolution.”



Junior 2: The Technology


The second car in the Junior line uses an array of sophisticated sensors to facilitate its fully autonomous driving. Currently, the sensor suite centers on a Velodyne HDL-64E S2 lidar that provides 1.3 million returns per second from 64 beams, with a range of ~100 m. It also includes six Bosch production automotive radar units for tracking vehicles, especially at longer ranges, and two Sick LD planar lidar scanners for near-field detection on the sides and rear of the car. Four Point Grey cameras are used for passive vision: a Ladybug3 spherical vision head mounted just above the Velodyne with six 2-MP cameras to provide a full view around the car; a pair of 2-MP forward-facing cameras for stereo vision; and a 15-Hz 5-MP forward-looking color camera.
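
As a back-of-the-envelope check on those Velodyne figures – with a 10-Hz spin rate and 16 bytes per point assumed, since the article gives only the return rate and beam count:

```python
# Rough throughput implied by the Velodyne HDL-64E S2 figures above.
returns_per_sec = 1.3e6
beams = 64
spin_hz = 10                     # assumed rotation rate; not given in the article
bytes_per_point = 16             # e.g., x, y, z, intensity as four float32 values

per_beam = returns_per_sec / beams       # ~20,300 returns per beam per second
per_rev = returns_per_sec / spin_hz      # ~130,000 points per 360-degree sweep
print(f"{per_beam:,.0f} returns/beam/s, {per_rev:,.0f} pts/rev, "
      f"{returns_per_sec * bytes_per_point / 1e6:.0f} MB/s raw")
```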


Stanford’s car, Junior, uses an array of sensors to negotiate traffic and a variety of obstacles. Shown here is an image from the car's internal view. The white dots are laser returns from the Velodyne lidar, the red bars are obstacles, and the blue arrows and orange lines are lanes. The yellow boxes represent cars Junior is tracking, and the yellow arrows extending from Junior show the planned trajectory into the intersection. Image courtesy of Mike Sokolsky, Stanford Artificial Intelligence Lab.

Published: October 2010
