The FABRIC Strategy for Verifying Neural Feedback Systems
New verification technique combines forward and backward reachability to prove safety of neural network-controlled systems.
A team from Stanford University led by I. Samuel Akinwande, Sydney M. Katz, Mykel J. Kochenderfer, and Clark Barrett has introduced FABRIC (Forward and Backward Reachability Integration for Certification), a new algorithm for verifying the safety of neural feedback systems. These systems pair neural network controllers with physical dynamics, as in autonomous vehicles and robotics, where proving that the closed loop cannot reach unsafe states is critical. The research addresses a key gap: while forward reachability (predicting future states from current ones) is well studied, backward reachability (determining which initial states could lead to unsafe outcomes) has lagged because existing methods scale poorly. FABRIC introduces new algorithms for computing both over- and under-approximations of backward reachable sets for nonlinear systems.
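To make the forward direction concrete, here is a minimal sketch, not FABRIC's actual algorithm, of one forward reachability step for a toy neural feedback loop using plain interval arithmetic. The controller weights, dynamics matrices, and box set representation are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not FABRIC): over-approximate the one-step forward
# reachable set of x_{t+1} = A x_t + B * nn(x_t) using interval
# arithmetic over axis-aligned boxes. All weights and dynamics are toy values.
import numpy as np

def interval_affine(W, b, lo, hi):
    """Tightest axis-aligned box bounding W x + b over the input box."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid = W @ center + b
    rad = np.abs(W) @ radius
    return mid - rad, mid + rad

def interval_relu(lo, hi):
    """ReLU is elementwise monotone, so the box image is the elementwise ReLU."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def forward_reach_step(x_lo, x_hi, layers, A, B):
    """One-step forward reachable box. The state and control boxes are
    combined independently, which ignores their correlation and therefore
    only over-approximates the true reachable set (sound but conservative)."""
    u_lo, u_hi = x_lo, x_hi
    for i, (W, b) in enumerate(layers):
        u_lo, u_hi = interval_affine(W, b, u_lo, u_hi)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            u_lo, u_hi = interval_relu(u_lo, u_hi)
    ax_lo, ax_hi = interval_affine(A, np.zeros(A.shape[0]), x_lo, x_hi)
    bu_lo, bu_hi = interval_affine(B, np.zeros(B.shape[0]), u_lo, u_hi)
    return ax_lo + bu_lo, ax_hi + bu_hi

# Toy double-integrator with a random one-hidden-layer controller.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 2)), np.zeros(4)),
          (rng.normal(size=(1, 4)), np.zeros(1))]
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
lo, hi = forward_reach_step(np.array([-0.1, -0.1]),
                            np.array([0.1, 0.1]), layers, A, B)
print("one-step forward reachable box:", lo, hi)
```

Interval arithmetic is only the crudest sound abstraction here; it serves to show the box-in, box-out structure that reachability tools share. Backward analysis asks the reverse question: which states could map into a given (for example, unsafe) box after one or more steps.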
By integrating these backward analysis techniques with established forward methods, FABRIC creates a more comprehensive verification framework. The algorithm was evaluated on a representative set of benchmarks and demonstrated significant performance improvements over prior state-of-the-art methods. This dual-direction analysis supports stronger safety certifications, proving both that a system will avoid dangerous regions (via forward analysis) and that safe operation is guaranteed from a broader set of starting conditions (via backward analysis). The work, detailed in the arXiv preprint 2603.08964, represents a meaningful advance in making complex AI-controlled systems more trustworthy and deployable in safety-critical domains.
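The certification logic this paragraph describes can be sketched as follows. Here `forward_over_approx` and `backward_under_approx` are hypothetical stand-ins for the paper's algorithms, not FABRIC's actual interfaces, and sets are represented as (lo, hi) NumPy box pairs.

```python
# Sketch of the dual-direction safety check described above. The two
# reachability oracles are hypothetical placeholders for FABRIC's methods.
import numpy as np

def boxes_disjoint(a_lo, a_hi, b_lo, b_hi):
    """True if two axis-aligned boxes cannot intersect."""
    return bool(np.any(a_hi < b_lo) or np.any(b_hi < a_lo))

def box_contains(out_lo, out_hi, in_lo, in_hi):
    """True if the outer box contains the inner box."""
    return bool(np.all(out_lo <= in_lo) and np.all(in_hi <= out_hi))

def certify(init_lo, init_hi, unsafe_lo, unsafe_hi,
            forward_over_approx, backward_under_approx, horizon):
    """Forward: prove the given initial box never reaches the unsafe box.
    Backward: return an under-approximated box of initial states certified
    safe over the same horizon, which may be larger than the given initial
    box, matching the dual-direction framing above."""
    lo, hi = init_lo, init_hi
    for _ in range(horizon):
        lo, hi = forward_over_approx(lo, hi)
        if not boxes_disjoint(lo, hi, unsafe_lo, unsafe_hi):
            return False, None  # over-approximation touches the unsafe set
    certified_lo, certified_hi = backward_under_approx(
        unsafe_lo, unsafe_hi, horizon)
    return True, (certified_lo, certified_hi)
```

Because the forward set is over-approximated and the backward safe set is under-approximated, both answers err on the conservative side: a `True` verdict is a proof, while a `False` verdict may only mean the abstraction was too loose.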
- Introduces scalable backward reachability analysis for neural feedback systems, a previously underdeveloped area.
- Integrates forward and backward methods into the FABRIC algorithm for more complete safety verification.
- Demonstrates significant performance improvements over prior state-of-the-art techniques on benchmark tests.
Why It Matters
Enables stronger safety proofs for AI controllers in autonomous vehicles and robotics, accelerating the deployment of trustworthy autonomous systems.