Radiometric fingerprinting of object surfaces using mobile laser scanning and semantic 3D road space models
New method links 312.4 million LiDAR beams to 6,368 city objects to identify materials.
Researchers from the Technical University of Munich (TU Munich) have published a novel method for extracting material information from repeated mobile laser scans of urban environments. Their paper, 'Radiometric fingerprinting of object surfaces using mobile laser scanning and semantic 3D road space models,' demonstrates how to leverage the growing volume of LiDAR data collected by autonomous vehicles. The core innovation is creating unique 'radiometric fingerprints' for surfaces by grouping millions of individual laser beam reflections from the same semantic object across different conditions—varying distances, angles, sensors, and environmental factors.
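The grouping idea can be sketched in a few lines: collect every return that hits the same semantic object, bin the returns by acquisition conditions, and summarize intensity per bin. The record fields, bin widths, and values below are illustrative assumptions for this sketch, not the paper's actual feature set.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical beam records: (object_id, distance_m, incidence_deg, intensity).
# Real campaigns would also carry sensor ID, timestamp, weather, etc.
beams = [
    ("wall_01", 12.0, 10.0, 0.62),
    ("wall_01", 25.0, 40.0, 0.48),
    ("wall_01", 13.5, 12.0, 0.60),
    ("sign_07", 30.0, 5.0, 0.95),
    ("sign_07", 31.0, 8.0, 0.93),
]

def fingerprint(records, dist_step=10.0, angle_step=15.0):
    """Group beams per object, then bin by range and incidence angle;
    the per-bin mean intensity forms a coarse radiometric fingerprint."""
    bins = defaultdict(list)
    for obj, dist, angle, inten in records:
        key = (obj, int(dist // dist_step), int(angle // angle_step))
        bins[key].append(inten)
    return {key: mean(vals) for key, vals in bins.items()}

fp = fingerprint(beams)
# In this toy data the retroreflective sign stays bright at long range,
# while the wall's intensity falls off at steeper incidence angles --
# the kind of recurring pattern the fingerprints are meant to capture.
```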
Using the Audi Autonomous Driving Dataset (A2D2) vehicle equipped with five LiDAR sensors, the team processed a massive dataset of 312.4 million individual beams collected over four campaigns. They automatically associated this sensor data with 6,368 distinct objects within a highly detailed, semantic 3D city model of four inner-city streets. This model, built to the CityGML 3.0 standard at Level of Detail 3 (LOD3) with centimeter accuracy, provides the necessary semantic framework for fine-grained analysis.
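Conceptually, associating a beam with a model object means finding the object surface the return lies on, within a tolerance the centimeter-accurate model makes tight. The planar surfaces, IDs, and threshold below are simplifying assumptions for illustration; the paper's association works against full LOD3 geometry, not infinite planes.

```python
# Illustrative object surfaces as (id, point_on_plane, unit_normal).
surfaces = [
    ("facade_a", (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)),  # vertical wall at x = 0
    ("road_1",   (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)),  # ground plane at z = 0
]

def associate(point, surfaces, tol=0.05):
    """Assign a LiDAR return to the semantic object whose surface plane
    it lies closest to, within a tolerance in meters; return None if no
    surface is close enough."""
    best_id, best_d = None, tol
    for obj_id, p0, n in surfaces:
        # Unsigned point-to-plane distance |(p - p0) . n|.
        d = abs(sum((point[i] - p0[i]) * n[i] for i in range(3)))
        if d < best_d:
            best_id, best_d = obj_id, d
    return best_id

hit = associate((0.02, 3.0, 1.5), surfaces)  # 2 cm off the facade plane
```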
The extracted fingerprints reveal consistent, recurring patterns within object classes (like buildings, roads, or signs), which act as indicators of their dominant material composition—information previously missing from digital city models. To support this workflow, the researchers also developed and released 3DSensorDB, a specialized geodatabase solution. By making the semantic model, method implementations, and database open-source, they provide a foundational toolkit for adding a material intelligence layer to urban digital twins, significantly expanding their analytical potential for simulation, planning, and autonomous system training.
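A beam-to-object store of this kind boils down to a relational layout: one row per beam keyed by object ID, indexed so per-object aggregation over hundreds of millions of rows stays fast. The toy SQLite schema below is in that spirit only; it does not reproduce the actual 3DSensorDB schema.

```python
import sqlite3

# In-memory stand-in for a beam-to-object geodatabase (schema hypothetical).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE beam (
    object_id TEXT, campaign INTEGER, distance_m REAL, intensity REAL)""")
# Index on object_id so per-object fingerprint queries avoid full scans.
con.execute("CREATE INDEX idx_beam_obj ON beam(object_id)")
con.executemany(
    "INSERT INTO beam VALUES (?, ?, ?, ?)",
    [("wall_01", 1, 12.0, 0.62), ("wall_01", 2, 25.0, 0.48),
     ("sign_07", 1, 30.0, 0.95)],
)
# Aggregate returns per object -- the raw material for a fingerprint.
rows = con.execute(
    "SELECT object_id, COUNT(*), AVG(intensity) FROM beam "
    "GROUP BY object_id ORDER BY object_id"
).fetchall()
```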
- Processed 312.4 million LiDAR beams from four scanning campaigns using the Audi A2D2 vehicle.
- Automatically linked sensor data to 6,368 objects in a centimeter-accurate, semantic 3D city model (CityGML 3.0 LOD3).
- Released the open-source 3DSensorDB geodatabase and full methodology to add material data to urban digital twins.
Why It Matters
Adds crucial material property data to city models, enhancing simulations for autonomous driving, urban planning, and digital twin accuracy.