DGHMesh: A Large-scale Dual-radar mmWave Dataset and Generalization-focused Benchmark for Human Mesh Reconstruction
15 subjects, 8 actions, and 360K synchronized frames from dual mmWave radars for human mesh reconstruction.
Researchers have introduced DGHMesh, a large-scale dual-radar millimeter-wave (mmWave) dataset designed to advance human mesh reconstruction (HMR) research. The dataset captures data from 15 subjects performing 8 distinct actions, yielding 360,000 synchronized frames from FMCW radar, SFCW radar, RGB cameras, and high-precision 3D HMR annotations. It includes raw I/Q data from both radar modalities and accurately calibrated spatial positions, enabling robust evaluation under diverse measurement configurations such as human position shifts, orientation shifts, subarray size variations, and cross-subject settings.
To leverage this dataset, the team also developed mmPTM, a query-based multi-radar fusion framework that jointly processes point clouds and imaging tubes for HMR. Extensive experiments against representative baselines show mmPTM consistently achieves outstanding accuracy and competitive generalization across multiple sub-benchmarks. This work addresses a critical gap in mmWave-based HMR by providing a fair comparison benchmark and a validated fusion approach, with the dataset and code publicly available to the research community.
- Dataset includes 360,000 synchronized frames from 15 subjects performing 8 actions using dual-radar (FMCW and SFCW) mmWave technology
- Benchmark tests generalization under 4 configuration shifts: position, orientation, subarray size, and cross-subject settings
- Proposed mmPTM framework uses query-based multi-radar fusion with point clouds and imaging tubes for accurate 3D human mesh reconstruction
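The idea of query-based multi-radar fusion can be illustrated with a minimal sketch: a set of learnable queries cross-attends first to tokens from one radar stream (e.g., an FMCW point cloud) and then to tokens from the other (e.g., SFCW imaging tubes), and each query regresses a 3D coordinate. This is a generic PyTorch illustration of the technique, not the authors' mmPTM code; the class name `QueryFusion`, the dimensions, and the two-stage attention order are all assumptions for exposition.

```python
# Hedged sketch of query-based multi-radar fusion (NOT the official mmPTM
# implementation). Names, dimensions, and structure are illustrative only.
import torch
import torch.nn as nn

class QueryFusion(nn.Module):
    """Learnable queries cross-attend to two radar token streams,
    then each query regresses a 3D position."""
    def __init__(self, dim=64, n_queries=24, n_heads=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, dim))
        self.attn_pc = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.attn_tube = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.head = nn.Linear(dim, 3)  # per-query 3D coordinate

    def forward(self, pc_tokens, tube_tokens):
        # pc_tokens: (B, N_pc, dim) features, e.g. from an FMCW point cloud
        # tube_tokens: (B, N_tube, dim) features, e.g. from SFCW imaging tubes
        q = self.queries.unsqueeze(0).expand(pc_tokens.size(0), -1, -1)
        q, _ = self.attn_pc(q, pc_tokens, pc_tokens)        # fuse stream 1
        q, _ = self.attn_tube(q, tube_tokens, tube_tokens)  # fuse stream 2
        return self.head(q)  # (B, n_queries, 3)

fusion = QueryFusion()
pred = fusion(torch.randn(2, 128, 64), torch.randn(2, 32, 64))
print(pred.shape)  # torch.Size([2, 24, 3])
```

In a real system the per-query outputs would typically parameterize a body model (e.g., SMPL joints) rather than raw points, but the query-plus-cross-attention pattern is the core of this family of fusion designs.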
Why It Matters
Enables privacy-preserving, contactless 3D body tracking with radar, advancing applications in healthcare, gaming, and autonomous systems.