
How Far Down the Road Is the Autonomous Vehicle?

For decades, truly autonomous vehicles have drawn ever closer to broad market acceptance. Emerging advancements in lidar technology may soon help them cross that last mile.

GREG SMOLKA, INSIGHT LIDAR

First conceived by editors of Scientific American magazine, the dream of self-driving cars — known today as autonomous vehicles (AVs) — recently passed its 100-year anniversary. With the phenomenal advancements in technology during the interim, it is surprising that AVs are not yet a common sight on the road.

Philip Koopman, associate professor at Carnegie Mellon University, an expert in AV safety and co-founder of Edge Case Research, famously said that autonomous driving has been 98% solved for over 20 years.

Ultrahigh-resolution lidar quickly captures point cloud data with 40 pixels of resolution at distances of 200 m to enable autonomous vehicles to react more quickly and appropriately to obstacles and hazards within view. Courtesy of Insight LiDAR.



Designers have learned how difficult it is to resolve the last couple of percentage points before a design meets its target specifications. AV technology is truly in the “last mile” of development, but as with many design projects, that mile can consume the majority of the development time. In the case of autonomous vehicles, the last mile involves the challenges of capturing data of sufficient quality, enabling the AV system to make good decisions, and resolving the corner cases — the problems or situations that occur outside of normal operating parameters.

Thus, while consumers remain enthusiastic about the prospects of greater safety on the road, mass deployment of autonomous vehicles remains elusive. Despite the progress that has been made in artificial intelligence (AI) and the underlying perception algorithms, fully autonomous personal vehicles are still predicted to be a good 10 years out. According to a recent New York Times article, both carmakers and technology companies have concluded that making autonomous vehicles is going to be harder, slower, and more costly than originally thought.

This doesn’t mean that companies have given up on the dream. Several small-scale deployments and test cases are on the road today. For example:
  • Waymo is offering rides without human safety drivers in Phoenix.

  • Cruise is testing AVs without human safety drivers in San Francisco.

  • An autonomous truck from Kodiak Robotics recently covered 800 miles without a single disengagement — that is, the safety driver never once had to take over control from the truck’s AV system.

  • Nuro announced a pilot with pharmacy/retailer CVS to autonomously deliver prescriptions in Houston.

The Society of Automotive Engineers defines six levels of vehicle automation, ranging from Level 0 (no automation) to Level 5 (fully autonomous). Level 3 marks the threshold at which an AV system shifts from merely assisting the driver to taking over control of the wheel — albeit within limited conditions. Those limiting conditions progressively fall away at Levels 4 and 5.

So, what is required to complete the development work and achieve full Level 4 and 5 autonomous driving? The detection technology must continue to evolve to meet increasingly demanding range and resolution requirements. System response delays must be reduced to ensure that an autonomous vehicle can react to something in its path while traveling at highway speeds. The AI software must integrate more data from real-world driving experience to identify and respond to the corner cases. And lastly, the cost of AV systems must decrease.

Sensor range and resolution

The lidar systems that underlie AV operation consist of a laser and photodetector sensors that receive reflections of the laser light from distant objects to create three-dimensional images. The challenge for AV designers is that they need images captured at distances of 200 m — slightly more than two football fields — with sufficient resolution and capture rates to enable a lidar system to achieve near-instant identification.

The Society of Automotive Engineers defines six levels of vehicle automation, ranging from Level 0 (no automation) to Level 5 (fully autonomous). Level 3 marks the threshold at which an autonomous vehicle (AV) system shifts from merely assisting the driver to taking over control of the wheel — albeit within limited conditions. Those limiting conditions progressively fall away at Levels 4 and 5.



This benchmark is based on the fact that a vehicle’s stopping distance at 60 mph is around 100 m (328 ft) and can be double that on a wet road. Further, at a speed of 65 mph, a vehicle travels 200 m in just seven seconds. This means seven seconds is all the time an autonomous system has to detect and identify an object — such as a small child in a dark, low-reflection coat — and decide whether and how to react.
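That time budget is easy to verify. Here is a quick back-of-the-envelope check in Python, using only the figures cited above (nothing here is specific to any particular sensor):

# Time available to detect, classify, and react to an object
# first seen at the 200 m benchmark range.
MPH_TO_MPS = 0.44704               # meters per second per 1 mph

speed_mps = 65 * MPH_TO_MPS        # ~29.1 m/s at 65 mph
detection_range_m = 200.0          # lidar benchmark range

time_budget_s = detection_range_m / speed_mps
print(f"Time budget at 65 mph: {time_budget_s:.1f} s")   # ~6.9 s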

Experts in perception indicate that at least 20 to 40 sensor pixels are required to properly classify an object on the road. Contrast this with the most advanced technology implemented today, which resolves only 10 to 12 pixels on an object at 200 m, roughly half the minimum an AV system requires to identify an obstacle in the road.
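To make that pixel count concrete, here is a rough geometry sketch. The 0.5-m target width (a pedestrian seen head-on) is our illustrative assumption, not a figure from the article:

import math

# Angular resolution needed to place N pixels across a target at range R.
target_width_m = 0.5      # assumed pedestrian width (illustrative)
range_m = 200.0           # benchmark detection range
pixels_required = 20      # lower bound cited by perception experts

angle_subtended_deg = math.degrees(math.atan(target_width_m / range_m))
per_pixel_deg = angle_subtended_deg / pixels_required
print(f"Target subtends {angle_subtended_deg:.3f} degrees; "
      f"the sensor needs roughly {per_pixel_deg:.4f} degrees per pixel")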

A velocity map of a pedestrian turning toward the street. Blue indicates the portion of the body facing away from the sensor. Red indicates the portion of the body facing toward the sensor. Measurement is sensitive enough to predict gestures and movement, adding a layer of safety. Courtesy of Insight LiDAR.



Fortunately, new developments are emerging. Current lidar systems emit short pulses of laser light and measure the time it takes for the pulses to reach an object and bounce back to the detector. This is known as the time-of-flight (TOF) technique. While TOF systems have yet to achieve detection benchmarks for distance and resolution, a newer alternative that employs frequency-modulated continuous wave (FMCW) detection substantially improves system sensitivity, enabling detection of low-reflectivity objects at greater distances.

Unlike TOF, which sends a short pulse of light out and looks for its return, the laser in an FMCW system transmits a continuous sweep of wavelengths of light. As in TOF, the goal is to look for the return signal. But with FMCW detection, the sensor retains a portion of the transmitted signal and combines it with the returned light to produce a multiplicative effect that boosts the overall signal. This approach provides much greater sensitivity, which is critical to enabling lidar sensors to “see” farther.
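The range measurement falls out of that frequency comparison. Below is a minimal sketch of the relationship, assuming a linear chirp; the 1-GHz sweep over 10 µs is an illustrative assumption, not the specification of any particular sensor:

C = 299_792_458.0            # speed of light, m/s

# Illustrative chirp parameters (assumptions, not a real sensor's specs).
chirp_bandwidth_hz = 1e9     # laser frequency swept over 1 GHz...
chirp_duration_s = 10e-6     # ...in 10 microseconds
target_range_m = 200.0       # benchmark target distance

# The returned chirp is delayed by the round trip; mixing it with the
# retained local copy produces a beat note proportional to range.
round_trip_delay_s = 2 * target_range_m / C
beat_hz = (chirp_bandwidth_hz / chirp_duration_s) * round_trip_delay_s
print(f"Beat frequency for a 200 m target: {beat_hz / 1e6:.0f} MHz")  # ~133 MHz

# Inverting the same relation recovers range from a measured beat note.
recovered_range_m = beat_hz * chirp_duration_s * C / (2 * chirp_bandwidth_hz)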

In one of many small-scale deployments of AV technology, Waymo is offering rides without safety drivers in Phoenix. Courtesy of iStock.com/rvolkan.



In effect, by emitting a continuous wave of light instead of laser pulses, FMCW lidar measures the frequency shift of the returned signal. This yields not only the range to a target but also any Doppler shift, which reveals the object’s relative velocity directly. Beyond this higher sensitivity and direct velocity measurement, FMCW-based systems also typically offer better immunity to stray light than TOF-based lidar sensors.
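The velocity measurement follows the standard optical Doppler relation. A brief sketch, assuming a 1550-nm source (a common choice for FMCW lidar; the article does not specify a wavelength):

wavelength_m = 1550e-9   # assumed source wavelength (not from the article)

def doppler_shift_hz(closing_speed_mps: float) -> float:
    """Frequency shift of light reflected from a target closing at v m/s."""
    return 2 * closing_speed_mps / wavelength_m

# A pedestrian stepping toward the sensor at 1.5 m/s:
print(f"{doppler_shift_hz(1.5) / 1e6:.2f} MHz")   # ~1.94 MHz

# In practice, pairing an up-chirp with a down-chirp shifts the two beat
# notes in opposite directions, letting the sensor separate the range
# term from the Doppler (velocity) term.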

TOF employs a simple, relatively inexpensive pulsed laser diode, while FMCW demands far more precise control over its tunable continuous-wave laser source, which must sweep through its wavelength chirps linearly, rapidly, and with high repeatability. Another challenge for lidar designers is developing laser sources with sufficient coherence length, so that the beam remains coherent long enough to reach the farthest required target and return to the sensor without breaking up.
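That coherence requirement can be estimated. For a Lorentzian laser line, coherence length scales as c/(pi * linewidth), and a 200-m target implies a 400-m round trip; the line-shape assumption is ours, not the article’s:

import math

C = 299_792_458.0   # speed of light, m/s

def max_linewidth_hz(coherence_length_m: float) -> float:
    # Lorentzian line shape assumed: L_c ~ c / (pi * linewidth).
    return C / (math.pi * coherence_length_m)

round_trip_m = 2 * 200.0   # out to the 200 m benchmark target and back
print(f"Linewidth must stay under ~{max_linewidth_hz(round_trip_m) / 1e3:.0f} kHz")
# ~239 kHz -- and practical designs want comfortable margin below that.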


Over the years, applications besides lidar have shifted from time-based measurement to frequency-based measurement due to its inherently better performance. For example, radar was originally a time-based technology, but is now mainly based on FMCW detection. High-speed optical communications have also shifted from time-based to frequency-based measurement, as have certain medical imaging applications such as optical coherence tomography. While time-based measurement is often easier to start with, applications typically shift to frequency-based measurement as soon as the technology permits.

The FMCW technique is leveraged today by systems developers such as Insight LiDAR to achieve the ultrahigh resolution required to obtain 40 pixels of resolution — even for a low-reflectivity object — at the benchmark distance of 200 m. This allows lidar systems to acquire more data at a faster rate and enables faster, more appropriate responses to obstacles and hazards for a greater degree of safety.

System response time

Lidar system latency determines the amount of time between detection of an object and the vehicle’s response, and the lower it is, the better. Every stage of the pipeline, including the object measurement function, can either add to or help reduce that delay. FMCW detection technology can help to minimize latency, especially through its direct velocity measurement.

Another component contributing to system latency is the AV system software, which analyzes lidar and other sensor data to perform the decision-making step.

Autonomous vehicle and robotics system designers often describe their challenge as “perceive, predict, and plan.” Lidar sensing technology provides the critical information for perception. With the right mix of long-distance, high-resolution object detection and velocity measurement, advanced lidar hardware can further provide a level of information quality that allows AV software to accurately perceive and predict object movements, especially the movements of pedestrians, bicyclists, and other mobility platforms.

When manually driving in areas where pedestrians or bicyclists are active, a driver constantly scans the environment to look for clues. If a person is on the curb and begins to look both ways, the driver can predict that the pedestrian is about to cross the street. If a bicyclist looks over his or her shoulder or begins to turn the bike, the driver understands the bicyclist’s intent to move in a new direction before it actually happens.

These situations present a challenge to traditional lidar systems. While these systems can sense that a person or bicyclist is in view, it is more challenging for them to predict the person’s next action before it begins. If the lidar hardware can detect movements as small as the turning of a person’s head or shoulders, then the software can quickly anticipate that person’s next action and then plan an appropriate action for the vehicle to take. In short, the ability to detect finer details, such as human gestures, takes much of the computational burden off the system software, enabling faster, more accurate decisions.

More and better data

AV software developers spend much of their time defining corner cases — a major barrier to full autonomy — and determining how to resolve them. Corner cases come up infrequently, but autonomous vehicles must be able to understand them, make appropriate predictions, and plan a proper action. Examples of corner cases that a lidar system must be able to navigate include:
  • A left turn with oncoming traffic.

  • Unusual road obstacles.

  • Extreme weather conditions.

  • A person emerging from a crowded sidewalk.

  • Unusual vehicles.

  • Animals in the road.

Many corner cases can be observed during test drives and in simulations, but simulations are limited by the experience of the people who design them. Autonomous vehicles can drive an enormous number of miles and still never encounter certain corner cases. As with any “long tail” problem in AI, more and better data is needed to help solve this issue. The ultrahigh-resolution detection and direct velocity measurement of FMCW technology is instrumental in providing a level of richness to lidar data that will allow AV software to more quickly recognize these scenarios and take appropriate action.

Driving costs

Cost will be a decisive driver of Level 4 and 5 adoption, especially for personal vehicles. Initial test AVs featured expensive sensor suites, and early lidar systems cost $75,000 or more. While prices have dropped, the lidar used in today’s autonomous vehicles is still expensive, with price tags ranging from $4000 to $8000 per unit. With multiple lidar units needed per vehicle, this remains too expensive for widespread deployment. Broad adoption will require lidar sensors that cost less than $500 per sensor, and ideally half that. A number of analysts and automobile manufacturers have put the target cost for 360° lidar coverage at $1000 per vehicle; typically, at least four lidar units are required to achieve this coverage, and most lidar systems are still expected to cost more than $1300 through 2025.
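A quick check shows how far apart those numbers are (simple arithmetic on the figures above):

sensors_per_vehicle = 4                 # typical count for 360-degree coverage
vehicle_budget_usd = 1000               # analysts' per-vehicle target

per_sensor_target_usd = vehicle_budget_usd / sensors_per_vehicle
print(per_sensor_target_usd)            # 250.0 -- half the $500 ceiling

current_low_usd = 4000                  # low end of today's per-unit range
print(current_low_usd / per_sensor_target_usd)   # 16.0 -- 16x over target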

While increased production volume can certainly help to drive costs down, system architecture will also have a huge impact on the price of lidar systems. To meet industry cost goals, lidar designs will need to deliver on three key factors:

Fewer parts: This drives down system complexity and cost, and also simplifies manufacturing and logistics costs.

While human drivers have some ability to predict future risk, such as a person breaking from a crowd (top), an AV system requires ultrahigh-resolution lidar and direct velocity measurement to pick up similar visual cues. A velocity map (bottom) of the same pedestrian (orange) identifies him as moving toward the sensor. Courtesy of Insight LiDAR.



Semiconductor architecture: Lidar-on-a-chip systems put all key optical components into photonic integrated circuits and all electronics on ASIC chips to take advantage of the cost and reliability of chip-scale architectures. FMCW lidar is especially amenable to these architectures because it utilizes lower-power lasers, which enables full integration on a chip.

Wafer-level testing: To meet aggressive industry cost targets, component- and system-level tests must represent a very small portion of the system cost. This can be accomplished by designing semiconductor components and assemblies to be tested on-wafer, ensuring that only known good dies make it to final system assembly and test.

Reason for optimism

There is great progress in the development of lidar sensing systems. New, ultrahigh-resolution FMCW lidar is demonstrating object classification and even gesture recognition at greater distances than before, which is critical for perception and prediction at high vehicle speeds. Combined with direct velocity measurement, these capabilities are enabling perception teams to tackle very difficult corner cases to enhance safety. Lidar-on-a-chip architectures also continue to advance, promising true semiconductor cost savings as volumes ramp up.

Further, the market need is greater than ever, with new use cases and AV programs accelerating at several automakers. There is still work to be done. But the dream of ubiquitous, safe autonomous vehicles is closer than ever.

Meet the author

Greg Smolka is vice president of business development at Insight LiDAR, where he focuses on building partnerships in the autonomous vehicle market. He has previously guided photonics applications in biomedical imaging, spectroscopy, semiconductors, telecommunications, and defense.


Published: March 2021