

Ambarella Develops 4D Imaging Radar Architecture

Ambarella, an edge AI semiconductor company, has revealed a centralized 4D imaging radar architecture that allows both central processing of raw radar data and deep, low-level fusion with other sensor inputs, including cameras, lidars, and ultrasonics. The architecture provides greater environmental perception and safer path planning in AI-based advanced driver assistance systems (ADAS) and L2+ to L5 autonomous driving systems, as well as in autonomous robotics, the company said.

The architecture features Ambarella’s Oculii radar technology, which uses AI software algorithms that dynamically adapt radar waveforms to the surrounding environment, providing high angular resolution of 0.5°, an ultradense point cloud of up to tens of thousands of points per frame, and a detection range of more than 500 m. This is achieved with an order of magnitude fewer multiple-input, multiple-output (MIMO) antenna channels, which reduces the data bandwidth and yields significantly lower power consumption than competing 4D imaging radars.
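For context, a back-of-envelope check shows what 0.5° angular resolution implies for a conventional design. This sketch assumes a uniform linear array with half-wavelength element spacing, whose Rayleigh resolution is roughly 2/N radians for N virtual channels; the figures it produces are illustrative, not Ambarella's.

```python
import math

def angular_resolution_deg(n_virtual: int) -> float:
    """Rayleigh resolution of a uniform linear array with
    half-wavelength element spacing: theta ~ 2/N radians."""
    return math.degrees(2.0 / n_virtual)

# Virtual channels a conventional array would need for ~0.5 degrees:
n_needed = math.ceil(2.0 / math.radians(0.5))
print(n_needed)                                  # 230
print(round(angular_resolution_deg(n_needed), 3))  # 0.498
```

Hundreds of virtual channels per module, under these assumptions, is what drives the data volumes described below.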

Edge AI semiconductor company Ambarella has revealed what the company reports is the first centralized 4D imaging radar architecture, which allows both central processing of raw radar data and deep, low-level fusion with other sensor inputs — including cameras, lidars, and ultrasonics. Courtesy of Ambarella.
Competing 4D imaging radar technologies produce data sets that are too large to transport and process centrally. Each module generates multiple terabits of data per second and consumes more than 20 W of power, because thousands of MIMO antennas are required per module to achieve the angular resolution that 4D imaging radar demands. Multiplied across the six or more radar modules needed to cover a vehicle, this data load makes central processing impractical for those technologies.
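The terabit-scale figure is easy to reproduce with rough front-end parameters; the channel count, sample rate, and bit depth below are assumptions for illustration, not numbers from the article.

```python
def raw_data_rate_tbps(n_channels: int, sample_rate_hz: float,
                       bits_per_sample: int) -> float:
    # Raw ADC output rate of a radar front end, in terabits/s.
    return n_channels * sample_rate_hz * bits_per_sample / 1e12

# Assumed: 2048 MIMO channels, 50 MHz sample rate, 16-bit samples.
per_module = raw_data_rate_tbps(2048, 50e6, 16)
print(round(per_module, 2))      # 1.64 Tbit/s per module
print(round(6 * per_module, 2))  # 9.83 Tbit/s across six modules
```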

By applying AI software to adapt the radar waveforms generated with existing monolithic microwave integrated circuit (MMIC) devices, and by using AI sparsification to create virtual antennas, Oculii technology reduces the antenna array for each processor-less MMIC radar head in the new architecture to 6 transmit × 8 receive. Additionally, Ambarella’s centralized architecture consumes significantly less power at the maximum duty cycle and reduces the bandwidth for data transport by 6×, while eliminating the need for pre-filtered edge processing and its resulting loss of sensor information.
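The channel arithmetic behind the 6 × 8 head can be sketched directly. The competing-head size below is an assumption chosen only to match the article's "order of magnitude fewer channels" claim; the 6× transport reduction is quoted by Ambarella separately and comes on top of the smaller array.

```python
def virtual_channels(n_tx: int, n_rx: int) -> int:
    # A MIMO radar forms n_tx * n_rx virtual receive channels.
    return n_tx * n_rx

oculii_head = virtual_channels(6, 8)
print(oculii_head)                         # 48 channels per radar head

# Assumed competing head with roughly 10x the channel count:
competing_head = virtual_channels(24, 20)  # 480 channels
print(competing_head // oculii_head)       # 10
```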

The software-defined centralized architecture also enables dynamic allocation of the CV3’s processing resources based on real-time conditions, both between sensor types and among sensors of the same type. For example, in heavy rain that diminishes long-range camera data, the CV3 can shift some of its resources to radar processing. Likewise, when it is raining on a highway, the CV3 can prioritize data from front-facing radar sensors to further extend the vehicle’s detection range while providing faster reaction times.
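That reallocation behavior can be illustrated with a toy scheduler. Everything here, including the sensor names, weights, and condition labels, is this example's assumption; it is not the CV3's actual resource manager.

```python
def allocate_compute(conditions: set) -> dict:
    """Illustrative only: shift a fixed processing budget between
    sensor types based on real-time driving conditions."""
    weights = {"camera": 0.5, "radar": 0.3, "lidar": 0.2}
    if "heavy_rain" in conditions:
        # Long-range camera data degrades in rain; favor radar.
        weights.update({"camera": 0.25, "radar": 0.55, "lidar": 0.2})
    if "highway" in conditions:
        # Prioritize front-facing radar for extended detection range.
        weights["radar"] = max(weights["radar"], 0.6)
        weights["camera"] = 1.0 - weights["radar"] - weights["lidar"]
    return weights

print(allocate_compute({"heavy_rain", "highway"}))
```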

According to Ambarella, this is the world’s first centralized 4D imaging radar architecture.

 









©2024 Photonics Media