A Neuro-Symbolic Framework Combining Inductive and Deductive Reasoning for Autonomous Driving Planning
A new framework uses LLMs and logic solvers to make self-driving cars safer and more transparent.
Researchers Hongyan Wei and Wael AbdAlmageed have introduced a neuro-symbolic framework designed to address the 'black-box' problem in autonomous driving. Current end-to-end models rely purely on data-driven, inductive reasoning, which offers no transparency or safety guarantees in rare, complex scenarios. Their approach adds deductive reasoning: a Large Language Model (LLM) dynamically interprets the scene and extracts logical rules, which a deterministic Answer Set Programming (ASP) solver then uses to derive safe, discrete driving decisions that are fully traceable and interpretable.
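The paper's actual ASP program is not reproduced here, but the flavor of deterministic, rule-based arbitration can be sketched in a few lines of Python. Every predicate, rule, and decision name below is a hypothetical illustration, not the authors' rule set:

```python
# Toy sketch of deductive decision arbitration in the spirit of ASP:
# scene facts (as an LLM might extract them) are matched against
# explicit logical rules, and the highest-priority applicable rule wins.
# All predicates and decisions here are hypothetical illustrations.

def arbitrate(facts: set) -> str:
    """Deterministically map scene facts to a discrete driving decision."""
    # Safety rules come first, mirroring the declarative priorities an
    # ASP program would encode as constraints.
    rules = [
        ({"pedestrian_ahead"}, "STOP"),
        ({"red_light"}, "STOP"),
        ({"lead_vehicle_close", "adjacent_lane_free"}, "CHANGE_LANE"),
        ({"lead_vehicle_close"}, "DECELERATE"),
    ]
    for conditions, decision in rules:
        if conditions <= facts:   # all rule conditions hold in the scene
            return decision
    return "KEEP_LANE"            # default when no rule fires

print(arbitrate({"red_light", "lead_vehicle_close"}))  # STOP
```

Because the arbitration is an explicit rule lookup rather than a learned mapping, every decision can be traced back to the exact rule and scene facts that produced it, which is the transparency property the framework is after.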
To translate these high-level logical decisions into actual vehicle control, the team developed a decision-conditioned decoding mechanism. It maps symbolic commands to learnable embedding vectors that condition both the planning query and the initial velocity of a differentiable Kinematic Bicycle Model (KBM). The KBM produces a kinematically feasible baseline trajectory, which a neural network then refines with residual corrections. This hybrid design pairs guaranteed kinematic feasibility with the flexibility of learned planning.
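The KBM itself is a standard vehicle model; a minimal Euler-integration sketch shows how it yields a feasible baseline that a learned residual could then adjust. The wheelbase, timestep, horizon, and the constant control inputs are illustrative assumptions, and the zero residual is a stand-in for the paper's neural refiner:

```python
import math

def kbm_rollout(x, y, theta, v, accel, steer, wheelbase=2.7, dt=0.1, steps=30):
    """Integrate the Kinematic Bicycle Model to produce a kinematically
    feasible baseline trajectory. Controls are held constant here; the
    real planner conditions them on the symbolic decision embedding."""
    traj = []
    for _ in range(steps):
        x += v * math.cos(theta) * dt            # position update
        y += v * math.sin(theta) * dt
        theta += (v / wheelbase) * math.tan(steer) * dt  # heading update
        v = max(0.0, v + accel * dt)             # speed update, no reversing
        traj.append((x, y))
    return traj

# A trained network would predict small residual corrections on top of
# this baseline; a zero residual stands in for the neural refiner here.
baseline = kbm_rollout(0.0, 0.0, 0.0, v=8.0, accel=-1.0, steer=0.0)
refined = [(px + 0.0, py + 0.0) for px, py in baseline]  # residual = 0 (stub)
print(refined[-1])
```

Splitting the output this way means even an untrained or poorly trained refiner cannot produce a physically impossible path, since the baseline already respects the vehicle's kinematics.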
The results on the challenging nuScenes benchmark are compelling. The framework outperforms the prior state-of-the-art model MomAD, achieving a 0.57-meter mean L2 error (a key measure of trajectory accuracy), cutting the collision rate to an exceptionally low 0.075%, and improving trajectory prediction consistency to 0.47 meters. This work represents a significant step toward autonomous systems that are not only more capable but also safer and auditable by design.
- Integrates LLMs for scene understanding and ASP solvers for logical arbitration, creating a transparent 'white-box' planning system.
- Reduces collision rate to 0.075% and L2 error to 0.57m on nuScenes, outperforming the prior SOTA model MomAD.
- Uses a Kinematic Bicycle Model to ensure physical feasibility, bridging the gap between discrete logic and continuous vehicle control.
Why It Matters
This moves autonomous driving from opaque, data-only models to systems with verifiable safety and explainable decision-making, critical for public trust and regulatory approval.