Point Cloud Feature Coding for Object Detection over an Error-Prone Cloud-Edge Collaborative System
Researchers achieve 93% average precision for 3D object detection while compressing point cloud features 172-fold for transmission over noisy wireless channels.
A research team from institutions including City University of Hong Kong and University of Leicester has published a breakthrough paper on arXiv titled 'Point Cloud Feature Coding for Object Detection over an Error-Prone Cloud-Edge Collaborative System.' The work addresses a critical bottleneck in distributed AI systems: efficiently transmitting 3D perception data between resource-constrained edge devices (like vehicle sensors) and powerful cloud servers. Their framework combines task-driven compression with adaptive error correction to enable reliable object detection even over noisy wireless channels where traditional methods fail.
The technical approach uses a lightweight feature compaction module that identifies and removes task-irrelevant regions from multi-scale point cloud representations, then applies channel-wise dimensionality reduction to retain only the information the detection task needs. For transmission, it employs SNR-adaptive channel coding for attribute data and Low-Density Parity-Check (LDPC) encoding for geometric information. On the receiving cloud side, a diffusion-based feature upsampling module reconstructs the complete multi-scale features. Tested on the KITTI autonomous driving dataset, the system achieved 3D average precision of 93.17%, 86.96%, and 77.25% on the easy, moderate, and hard difficulty levels, respectively, while compressing features 172-fold and operating over challenging 0 dB SNR wireless conditions. The team will release their source code on GitHub, potentially accelerating deployment of efficient edge-cloud perception systems.
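To make the edge-side pipeline concrete, here is a minimal sketch of the two compaction ideas described above: masking out task-irrelevant spatial regions, then reducing the channel dimension. This is not the paper's released code; the function name, the boolean relevance mask, and the random projection (standing in for a learned 1x1 convolution) are all illustrative assumptions.

```python
import numpy as np

def compact_features(features, relevance_mask, reduced_channels):
    """Hypothetical sketch of edge-side feature compaction.

    features:        (C, H, W) feature map from the point cloud backbone
    relevance_mask:  (H, W) boolean map of task-relevant cells
    reduced_channels: target channel count C' < C
    """
    C, H, W = features.shape
    # Step 1: keep only spatial positions flagged as task-relevant.
    kept = features[:, relevance_mask]               # (C, N_kept)
    # Step 2: channel-wise dimensionality reduction. A fixed random
    # projection stands in here for the learned linear map.
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((reduced_channels, C)) / np.sqrt(C)
    compacted = proj @ kept                          # (C', N_kept)
    ratio = features.size / compacted.size
    return compacted, ratio

# Toy example: 64 channels on a 32x32 grid, 25% of cells "relevant".
feats = np.random.default_rng(1).standard_normal((64, 32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[:16, :16] = True
out, ratio = compact_features(feats, mask, reduced_channels=16)
print(out.shape, ratio)  # (16, 256) 16.0
```

Both steps multiply: dropping 75% of cells gives 4x, and 64 to 16 channels gives another 4x, for 16x overall; the paper's 172x target would require a more aggressive combination of the two.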
- Achieves 172x compression of 3D point cloud features while maintaining 93.17% average precision (easy level) on the KITTI dataset
- Uses SNR-adaptive channel coding and LDPC encoding for reliable transmission over 0 dB SNR wireless channels
- Lightweight edge processing enables real-time object detection for autonomous vehicles with limited bandwidth
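The 0 dB SNR condition cited above means the received signal and the channel noise have equal power, which is why robust channel coding such as LDPC matters. A short additive-white-Gaussian-noise sketch (illustrative only, not the paper's channel model) makes this concrete:

```python
import numpy as np

def awgn(signal, snr_db):
    """Add white Gaussian noise at the given SNR (dB).

    At snr_db = 0 the injected noise power equals the signal power,
    so roughly half the 'energy' at the receiver is pure noise.
    """
    snr_linear = 10 ** (snr_db / 10)
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / snr_linear
    noise = np.random.default_rng(0).standard_normal(signal.shape)
    return signal + np.sqrt(noise_power) * noise

x = np.random.default_rng(1).standard_normal(10_000)  # transmitted features
y = awgn(x, snr_db=0.0)                               # received at 0 dB SNR
noise_to_signal = np.mean((y - x) ** 2) / np.mean(x ** 2)
print(noise_to_signal)  # close to 1.0: noise power matches signal power
```

Uncoded values sent through such a channel are badly corrupted, which motivates the system's split of SNR-adaptive coding for attributes and LDPC for geometry.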
Why It Matters
Enables real-time 3D perception for autonomous vehicles and robots despite limited bandwidth and unreliable network conditions.