Photonics Handbook

Learning Model Identifies Road Features from Unprocessed Point Cloud Data

TOKYO, Jan. 11, 2023 — A Japanese research group has developed a way to put vast stores of unprocessed point cloud data, collected for Japan’s public works, to practical use. The group created a point cloud data-based deep learning model for road feature identification that could improve road maintenance and urban management and increase the accuracy of virtual road maps.

The researchers developed a deep learning algorithm that uses high-definition 3D maps to automatically identify and extract road features from the point cloud data. The extracted road features are used to generate training data, and the data is used to construct a deep learning model for road feature identification.

Professor Ryuichi Imai of Hosei University collaborated with researchers at Osaka University of Economics, Setsunan University, and Dynamic Map Platform Co. Ltd. to develop the algorithm, which automates the process of generating training data, and to construct the road feature identification model.

The researchers first separated the ground surface from the point cloud data using CloudCompare, a 3D point cloud processing software. Next, they generated area data from the high-definition map and extracted component points of road features. These points were assigned as either road signs or traffic lights. The researchers provided other labels for the remaining data.
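The ground separation step described above was performed with CloudCompare; as a rough illustration of the idea, a minimal grid-based minimum-elevation filter can be sketched in Python (this is an assumed, simplified stand-in, not the algorithm CloudCompare actually applies):

```python
import numpy as np

def separate_ground(points, cell=1.0, tol=0.2):
    """Split an (N, 3) point cloud into ground and non-ground points.

    Grid-based minimum-elevation filter: within each horizontal cell,
    points within `tol` meters of the cell's lowest point are treated
    as ground; everything else (signs, poles, lights) is non-ground.
    """
    min_z = {}
    cells = [tuple(c) for c in np.floor(points[:, :2] / cell).astype(int)]
    # Find the lowest elevation in each horizontal grid cell
    for key, z in zip(cells, points[:, 2]):
        if key not in min_z or z < min_z[key]:
            min_z[key] = z
    # Points close to their cell's minimum elevation are ground
    is_ground = np.array([z <= min_z[key] + tol
                          for key, z in zip(cells, points[:, 2])])
    return points[is_ground], points[~is_ground]
```

In practice a filter like this leaves above-ground structures, such as sign posts and traffic light poles, in the non-ground set for later labeling.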

To generate the training data, the researchers extended the area data corresponding to the component points and generated projection images from the point cloud.
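The article does not detail how the projection images are produced; one plausible, simplified sketch is to rasterize an extracted feature's points onto a 2D plane (the resolution, image size, and front-view projection here are assumptions for illustration):

```python
import numpy as np

def projection_image(points, res=0.05, shape=(128, 128)):
    """Rasterize a feature's 3D points into a binary 2D image.

    Projects onto the x-z (front view) plane; each pixel covered by
    at least one point is set to 1. A simplified stand-in for the
    projection-image step described in the article.
    """
    h, w = shape
    img = np.zeros(shape, dtype=np.uint8)
    # Shift the feature so its minimum x and z sit at the image origin
    xz = points[:, [0, 2]] - points[:, [0, 2]].min(axis=0)
    cols = np.clip((xz[:, 0] / res).astype(int), 0, w - 1)
    rows = np.clip(h - 1 - (xz[:, 1] / res).astype(int), 0, h - 1)
    img[rows, cols] = 1
    return img
```

Images like these can then be labeled by feature class and fed to a standard object-detection training pipeline.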

Using the training data, the researchers constructed the identification model using an object-detection algorithm. The model can also detect road features by clustering points beyond those classified as ground surface using CloudCompare.
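The article does not specify the clustering method; a naive Euclidean single-linkage clustering over the non-ground points, as sketched below, conveys the general idea (the distance threshold and flood-fill approach are illustrative assumptions):

```python
import numpy as np

def euclidean_clusters(points, eps=0.5):
    """Group points into candidate feature clusters.

    Naive single-linkage clustering: points within `eps` meters of
    each other are flood-filled into the same cluster. Returns an
    integer cluster label per point. O(n^2); illustrative only.
    """
    n = len(points)
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = cluster
        while stack:
            j = stack.pop()
            # Pull in all unlabeled neighbors within eps of point j
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.flatnonzero((d < eps) & (labels == -1)):
                labels[k] = cluster
                stack.append(k)
        cluster += 1
    return labels
```

Each resulting cluster (e.g., an isolated pole with a sign) can then be rasterized and passed to the detector for classification.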

Point cloud data is of limited use in an unprocessed, unstructured state. It can be structured by automatically extracting features using a completion drawing that shows the finished geometry of a construction object. “Currently, people need to visually check the point cloud data to identify road features, as computers cannot recognize them,” Imai said. “But with our proposed method, the feature extraction can be done automatically, including the features at undeveloped road map sections.”

Researchers constructed a deep learning model for extracting road features in Japan from point cloud data using high-definition maps. Courtesy of Imai et al. (2022) | SCIS&ISIS 2022.
A previous approach proposed by the researchers also used high-definition 3D map data to extract road features, but it was limited to the developed sections of road maps.

The researchers tested the algorithm on a road with 65 road signs, 46 traffic lights, and noise features over a distance of 1.5 km. They used 258 road sign images and 168 traffic light images to train the identification model, and 36 and 24 images, respectively, to evaluate the model's determination accuracy.

The researchers found that the precision, recall, and F-measure were 0.84, 0.75, and 0.79, respectively, for the road signs, and 1.00, 0.75, and 0.86, respectively, for the traffic lights, with the traffic light precision of 1.00 indicating zero false detections for that class. The precision of the proposed model was shown to be higher than that of existing models.
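These metrics follow directly from the counts of true positives (tp), false positives (fp), and false negatives (fn). The short helper below reproduces the reported traffic light figures using counts consistent with the article (tp = 18, fp = 0, fn = 6 are inferred, not stated, from a recall of 0.75 over 24 test images):

```python
def precision_recall_f(tp, fp, fn):
    """Compute precision, recall, and F-measure from detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # F-measure is the harmonic mean of precision and recall
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

p, r, f = precision_recall_f(18, 0, 6)  # inferred traffic light counts
```

With zero false positives, precision is exactly 1.00, and the F-measure of about 0.86 matches the reported value.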

“A product model constructed from point cloud data will enable the realization of a digital twin environment for urban space with regularly updated road maps,” Imai said. “It will be indispensable for managing and reducing traffic restrictions and road closures during road inspections. The technology is expected to reduce time costs for people using roads, cities, and other infrastructures in their daily lives.”

The research was presented at the Joint 12th International Conference on Soft Computing and Intelligent Systems and 23rd International Symposium on Advanced Intelligent Systems (www.j-soft.org/2022).

Published: January 2023
Glossary
machine vision
Machine vision, also known as computer vision or computer sight, refers to the technology that enables machines, typically computers, to interpret and understand visual information from the world, much like the human visual system. It involves the development and application of algorithms and systems that allow machines to acquire, process, analyze, and make decisions based on visual data.
algorithm
A precisely defined series of steps that describes how a computer performs a task.
deep learning
Deep learning is a subset of machine learning that involves the use of artificial neural networks to model and solve complex problems. The term "deep" in deep learning refers to the use of deep neural networks, which are neural networks with multiple layers (deep architectures). These networks, often called deep neural networks or deep neural architectures, have the ability to automatically learn hierarchical representations of data.
point cloud
A point cloud is a set of data points in a three-dimensional coordinate system, where each point represents a specific location in space. These points are typically obtained through various sensing techniques such as lidar (light detection and ranging), photogrammetry, structured light scanning, or 3D scanning. Each point in a point cloud is defined by its spatial coordinates (x, y, z), representing its position in three-dimensional space.
