The Era of End-to-End Autonomy: Transitioning from Rule-Based Driving to Large Driving Models
New research analyzes Tesla FSD V12, NVIDIA Cosmos, and the rise of 'Large Driving Models' that learn from data, not rules.
A new research paper by Eduardo Nebot and Julie Stephany Berrio Perez, published on arXiv, provides a comprehensive analysis of the seismic shift occurring in autonomous driving. The industry is moving decisively away from traditional, modular 'sense-plan-act' architectures, which rely on hand-coded rules, toward end-to-end (E2E) neural networks known as Large Driving Models (LDMs). These LDMs, inspired by large language models, process raw sensor data (like camera feeds) and output driving commands directly, learning complex behaviors from vast datasets rather than explicit programming.
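To make the contrast concrete, the end-to-end idea can be reduced to a toy sketch: a single neural network that maps a raw (flattened) camera frame straight to a [steering, throttle] command, with no hand-coded perception, prediction, or planning modules in between. Everything below is illustrative only; the layer sizes, weights, and the `drive` function are invented for this example, and real Large Driving Models are transformer-scale networks trained on vast fleet datasets.

```python
import math
import random

random.seed(0)

# Illustrative sizes: an 8x8 grayscale frame and a small hidden layer.
IN_DIM, HIDDEN = 64, 16

# Randomly initialized weights stand in for weights learned from driving data.
W1 = [[random.gauss(0, 0.05) for _ in range(IN_DIM)] for _ in range(HIDDEN)]
W2 = [[random.gauss(0, 0.05) for _ in range(HIDDEN)] for _ in range(2)]

def drive(frame):
    """Map one flattened camera frame (pixel values 0-255) to (steering, throttle)."""
    x = [p / 255.0 for p in frame]                        # normalize pixels
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)))   # ReLU hidden layer
         for row in W1]
    out = [sum(w * hi for w, hi in zip(row, h)) for row in W2]
    steering = math.tanh(out[0])                # steering angle in [-1, 1]
    throttle = 1 / (1 + math.exp(-out[1]))      # throttle in [0, 1]
    return steering, throttle

frame = [random.randrange(256) for _ in range(IN_DIM)]
steering, throttle = drive(frame)
```

The point of the sketch is architectural: driving behavior lives entirely in the learned weights, so improving the system means training on more data rather than editing rules, which is exactly the shift the paper describes.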
The paper cites real-world evidence from deployed systems, including Tesla's Full Self-Driving (FSD) V12 and V14, Rivian's Unified Intelligence platform, and NVIDIA's Cosmos. It highlights that E2E systems demonstrate superior ability to navigate the 'long tail' of rare but critical driving scenarios. A key commercial outcome is the rise of 'supervised E2E driving' or 'L2++' systems, where the AI performs most driving tasks but requires human oversight. Major automakers are targeting 2026 for widespread deployment of this technology.
Furthermore, the authors argue this architectural revolution extends beyond cars. The principles of E2E learning and embodied AI are directly applicable to other complex robotics domains, most notably humanoid robots. The transition signifies a broader move in robotics from brittle, programmed intelligence to flexible, learned intelligence, with profound implications for how intelligent machines are built and deployed across industries.
- The industry is shifting from rule-based modular systems to end-to-end 'Large Driving Models' (LDMs) that learn from data.
- Real-world systems like Tesla FSD V12/V14 and NVIDIA Cosmos show that E2E AI handles complex driving scenarios better than traditional hand-coded stacks.
- Supervised 'L2++' systems, where AI drives with human oversight, are slated for major manufacturer deployment starting in 2026.
Why It Matters
This shift defines the next generation of autonomous vehicles and robotics, moving from programmed logic to learned intelligence for safer, more capable systems.