SwarmDrive: Semantic V2V Coordination for Latency-Constrained Cooperative Autonomous Driving
Cars share intent via local SLMs, slashing cloud latency by 70%
Autonomous driving faces a latency paradox: cloud-hosted LLMs add round-trip delays and rely on stable connectivity, while purely local edge models struggle with occlusion. SwarmDrive, a new semantic V2V coordination framework from a team led by Anjie Qiu, solves this by having nearby vehicles run local Small Language Models (SLMs) that share compact intent distributions only when uncertainty is high. This event-triggered consensus mechanism fuses decisions without flooding the network.
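The event trigger can be sketched as follows. This is a minimal illustration, not the paper's implementation: the entropy formula, the product-based fusion rule, and the maneuver labels are assumptions; only the 0.65 threshold comes from the reported experiments.

```python
import math

ENTROPY_THRESHOLD = 0.65  # trigger value reported in the study; base of the log is assumed

def entropy(dist):
    """Shannon entropy (bits) of an intent distribution {maneuver: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def should_broadcast(dist, threshold=ENTROPY_THRESHOLD):
    """Event trigger: share the compact intent distribution only when the
    local SLM is uncertain, keeping the V2V channel quiet otherwise."""
    return entropy(dist) > threshold

def fuse(local, received):
    """Consensus by normalized product of distributions - one plausible
    fusion rule; the paper's exact consensus operator is not specified here."""
    fused = dict(local)
    for dist in received:
        for k in fused:
            fused[k] *= dist.get(k, 1e-9)  # small floor for missing maneuvers
    z = sum(fused.values())
    return {k: v / z for k, v in fused.items()}
```

For example, a near-uniform distribution like `{"yield": 0.4, "proceed": 0.35, "stop": 0.25}` has entropy well above 0.65 bits and would trigger a broadcast, while a confident `{"proceed": 0.95, "stop": 0.03, "yield": 0.02}` stays local, which is how the scheme avoids flooding the network.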
In a 5-seed executable study of a single occluded intersection, SwarmDrive under its 6G communication setting raised the success rate from a single local SLM's 68.9% to 94.1%, while cutting latency from a 510ms cloud reference to just 151.4ms. The researchers also swept swarm size, packet loss, and entropy thresholds, finding the best balance at 4 vehicles and a 0.65 entropy trigger. While not a deployment-grade validation of a real 6G stack, the results demonstrate that semantic edge cooperation can work under tight latency constraints.
- SwarmDrive uses local SLMs on vehicles to share intent distributions only when uncertainty is high, reducing communication overhead
- Achieved 94.1% success rate vs 68.9% for a single local SLM, with latency 70% lower than cloud-based inference
- Optimal performance with 4 vehicles and an entropy threshold of 0.65; larger swarms increased packet loss
Why It Matters
SwarmDrive shows edge AI cooperation can beat cloud latency for safer autonomous driving in occluded scenarios.