
Learning Model Identifies Road Features from Unprocessed Point Cloud Data

A Japanese research group has developed a way to put vast stores of unprocessed point cloud data, collected for Japan’s public works, to practical use. The group created a point cloud data-based deep learning model for road feature identification that could improve road maintenance and urban management and increase the accuracy of virtual road maps.

The researchers developed a deep learning algorithm that uses high-definition 3D maps to automatically identify and extract road features from the point cloud data. The extracted road features are used to generate training data, which in turn is used to construct a deep learning model for road feature identification.

Professor Ryuichi Imai of Hosei University collaborated with researchers at Osaka University of Economics, Setsunan University, and Dynamic Map Platform Co. Ltd. to develop the algorithm, which automates the generation of training data, and to construct the road feature identification model.

The researchers first separated the ground surface from the point cloud data using CloudCompare, an open-source 3D point cloud processing application. Next, they generated area data from the high-definition map and extracted component points of road features. These points were labeled as either road signs or traffic lights, and the remaining data received other labels.
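The ground-separation step can be illustrated with a simple height-threshold filter. This is a minimal sketch only; the researchers used CloudCompare, whose ground-extraction tools are considerably more robust than a flat cutoff.

```python
import numpy as np

def separate_ground(points, ground_z=0.2):
    """Split an N x 3 point cloud into ground and non-ground points
    using a naive height threshold (illustrative stand-in for
    CloudCompare's ground-extraction tools)."""
    mask = points[:, 2] <= ground_z
    return points[mask], points[~mask]

# Example: three points, one near the ground plane.
pts = np.array([[1.0, 2.0, 0.1],    # ground-level point
                [1.5, 2.1, 3.2],    # e.g., a traffic light head
                [0.9, 1.8, 2.4]])   # e.g., a road sign panel
ground, features = separate_ground(pts)
print(len(ground), len(features))  # 1 2
```

The `ground_z` value here is an arbitrary assumption; real road scenes have sloped or uneven ground, which is why dedicated tools are used in practice.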

To generate the training data, the researchers extended the area data corresponding to the component points and further generated the point cloud projection images.
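Projecting component points into 2D images, as described above, can be sketched as a binary occupancy image on a vertical plane. The resolution, image size, and projection axis below are illustrative assumptions, not the researchers' actual parameters.

```python
import numpy as np

def project_to_image(points, resolution=0.1, size=64):
    """Project 3D points onto a vertical (x, z) plane as a binary
    occupancy image -- a simplified stand-in for the point cloud
    projection images used as training data."""
    img = np.zeros((size, size), dtype=np.uint8)
    cols = np.rint(points[:, 0] / resolution).astype(int)
    rows = np.rint(size - 1 - points[:, 2] / resolution).astype(int)
    valid = (cols >= 0) & (cols < size) & (rows >= 0) & (rows < size)
    img[rows[valid], cols[valid]] = 255
    return img

pts = np.array([[0.5, 0.0, 1.0], [1.2, 0.0, 2.5]])
img = project_to_image(pts)
print(np.count_nonzero(img))  # 2
```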

Using the training data, the researchers constructed the identification model with an object-detection algorithm. The model detects road features from clustered points, in addition to the ground surface points identified using CloudCompare.

Point cloud data is of limited use in an unprocessed, unstructured state. It can be structured by automatically extracting features with the help of a completion drawing, which shows the as-built geometry of a construction object. “Currently, people need to visually check the point cloud data to identify road features, as computers cannot recognize them,” Imai said. “But with our proposed method, the feature extraction can be done automatically, including the features at undeveloped road map sections.”

Researchers constructed a deep learning model for extracting road features in Japan from point cloud data using high-definition maps. Courtesy of Imai et al. | SCIS&ISIS 2022.
A previous approach proposed by the researchers also used high-definition 3D map data to extract road features, but it was limited to the developed sections of road maps.

The researchers tested the algorithm on a 1.5-km stretch of road containing 65 road signs, 46 traffic lights, and noise features. They used 258 road sign images and 168 traffic light images to train the identification model, and 36 and 24 images, respectively, to evaluate the model’s detection accuracy.

The researchers found that the precision, recall, and F-measure were 0.84, 0.75, and 0.79, respectively, for the road signs, and 1.00, 0.75, and 0.86, respectively, for the traffic lights; the precision of 1.00 indicates that no traffic lights were falsely detected. The precision of the proposed model was shown to be higher than that of existing models.
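The reported F-measures follow directly from the standard harmonic mean of precision and recall, as a quick check confirms:

```python
def f_measure(precision, recall):
    """F-measure (F1 score): harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values reported for the road-sign and traffic-light classes.
print(round(f_measure(0.84, 0.75), 2))  # 0.79
print(round(f_measure(1.00, 0.75), 2))  # 0.86
```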

“A product model constructed from point cloud data will enable the realization of a digital twin environment for urban space with regularly updated road maps,” Imai said. “It will be indispensable for managing and reducing traffic restrictions and road closures during road inspections. The technology is expected to reduce time costs for people using roads, cities, and other infrastructures in their daily lives.”

The research was presented at the Joint 12th International Conference on Soft Computing and Intelligent Systems and 23rd International Symposium on Advanced Intelligent Systems (SCIS&ISIS 2022).
Jan 2023

©2023 Photonics Media, 100 West St., Pittsfield, MA, 01201 USA.