Towards Intelligent Computation Offloading in Dynamic Vehicular Networks: A Scalable Multilayer Pipeline
A modified PSO algorithm selects edge servers in 26 ms on average across 10 servers...
A team of researchers from the University of Stuttgart, including Falk Dettinger, Matthias Weiß, Baran Can Gül, Sruthi Mangala Suresh, Nasser Jazdi, and Michael Weyrich, has published a paper on arXiv titled 'Towards Intelligent Computation Offloading in Dynamic Vehicular Networks: A Scalable Multilayer Pipeline.' The paper addresses a critical challenge in Software-Defined Vehicles (SDVs): the growing computational gap between increasingly demanding algorithms and static onboard hardware that remains unchanged throughout a vehicle's 10+ year lifespan. This gap threatens safety-critical functions such as advanced driver-assistance systems (ADAS) and real-time perception tasks.
The proposed solution is a novel four-layer computation offloading pipeline that dynamically distributes vehicular functions to cloud and edge resources while adhering to strict Round Trip Time (RTT) constraints. The key innovation is an enhanced Particle Swarm Optimization (PSO) algorithm that integrates distance- and direction-based penalties with functional requirements to optimize edge server selection for moving vehicles. Tested on a Kubernetes-based cloud infrastructure with realistic vehicular mobility patterns, the pipeline reduces average response time compared to a conventional brute-force baseline while maintaining the success rate of latency-critical tasks. The modified PSO achieves an average execution time of 26 ms for ten servers and ten tasks on CPU, and 550 ms for 15 servers with 1000 tasks on GPU. These results confirm the pipeline's effectiveness in bridging the computational gap for next-generation SDVs, offering a scalable way to keep vehicles performant over their long lifecycle.
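To make the idea of a penalty-augmented PSO concrete, here is a minimal sketch in Python. The server coordinates, latencies, penalty weights, and the `cost` function below are all invented for illustration; the paper's actual fitness function, RTT constraints, and parameter settings are not reproduced. The sketch only shows the general pattern: score each candidate edge server by base latency plus a distance penalty plus a direction penalty (servers behind the vehicle's heading cost more, since the vehicle is moving away from them), and let a small swarm search the server-index axis for the cheapest assignment.

```python
import math
import random

# Hypothetical edge servers: 2D position and base processing latency in ms.
SERVERS = [
    {"pos": (0.0, 0.0), "latency": 20.0},
    {"pos": (5.0, 1.0), "latency": 12.0},
    {"pos": (9.0, 8.0), "latency": 8.0},
    {"pos": (2.0, 7.0), "latency": 15.0},
]

VEHICLE_POS = (1.0, 1.0)
VEHICLE_HEADING = (1.0, 0.0)  # unit vector: vehicle driving along +x


def cost(idx):
    """Penalized cost of assigning the task to server idx (illustrative weights)."""
    s = SERVERS[idx]
    dx = s["pos"][0] - VEHICLE_POS[0]
    dy = s["pos"][1] - VEHICLE_POS[1]
    dist = math.hypot(dx, dy)
    # Direction penalty: cos(angle) between heading and server bearing is 1 when
    # the server lies straight ahead and -1 when it lies directly behind.
    cos_angle = ((dx * VEHICLE_HEADING[0] + dy * VEHICLE_HEADING[1]) / dist
                 if dist > 0 else 1.0)
    direction_penalty = (1.0 - cos_angle) * 10.0
    return s["latency"] + 2.0 * dist + direction_penalty


def pso_select(n_particles=10, iters=40, seed=0):
    """Minimal PSO over the continuous server-index axis, rounded for evaluation."""
    rng = random.Random(seed)
    hi = len(SERVERS) - 1
    pos = [rng.uniform(0, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    pbest_cost = [cost(round(p)) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # Standard velocity update: inertia + cognitive + social terms.
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], 0.0), float(hi))
            c = cost(round(pos[i]))
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i], c
    return round(gbest), gbest_cost


best_server, best_cost = pso_select()
print(best_server, round(best_cost, 1))
```

In this toy setup the swarm favors the server that is both close and roughly ahead of the vehicle, even though a farther server has lower base latency; the real pipeline additionally filters candidates against per-function RTT budgets before selection.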
- The four-layer pipeline dynamically offloads compute to cloud/edge resources, meeting strict Round Trip Time constraints.
- The enhanced PSO algorithm integrates distance- and direction-based penalties, running in 26 ms on average on CPU for 10 servers and tasks.
- On GPU, it handles 15 servers with 1000 tasks in 550 ms, reducing response time versus brute-force search while maintaining success rates for latency-critical tasks.
Why It Matters
Ensures software-defined vehicles stay safe and performant over 10+ years despite static hardware.