Quantitative Verification of Finite-Time Constrained Occupation Measures for Continuous-time Stochastic Systems
Researchers introduce a mathematical method to prove robots will spend enough time in target zones without violating safety rules.
Researchers Bai Xue and C.-H. Luke Ong have introduced a novel mathematical framework for verifying the safety and performance of autonomous systems governed by stochastic differential equations (SDEs). Published on arXiv, their work addresses a critical gap in classical verification, which typically asks only whether a system eventually reaches a target. Their method instead quantifies the probability that a system, such as a drone or a chemical reactor, accumulates a required amount of time within a specific operational zone while never violating predefined safety boundaries. This is essential for real-world tasks such as persistent surveillance, where a drone must loiter over an area, or wireless charging, where a device must remain in a charging field for a sufficient duration.
The core innovation is a barrier-certificate framework that derives three distinct classes of mathematical certificates: one for computing upper bounds and two for computing lower bounds on the probability of satisfying these 'cumulative specifications.' By modeling a 'stopped process' that freezes the system's state if it hits a safety boundary, the method provides rigorous guarantees. The approaches are validated through numerical examples implemented using semidefinite programming, a type of convex optimization, making the theoretical proofs computationally tractable. This work fundamentally shifts verification from analyzing one-time events to ensuring sustained, safe operation over time, providing a more robust foundation for deploying autonomous systems in uncertain environments.
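To make the "cumulative specification" concrete, here is a minimal Monte Carlo sketch of a stopped process: a hypothetical 1D Ornstein-Uhlenbeck system (dX = -X dt + 0.5 dW, with made-up target and safety thresholds, not the paper's examples) is simulated with Euler-Maruyama; a run succeeds if it accrues the required occupation time in the target zone before the horizon without ever leaving the safe set. This only estimates the probability by sampling; the paper's contribution is to bound it rigorously with certificates.

```python
# Illustrative Monte Carlo check of a cumulative-occupation specification.
# Dynamics, zones, and thresholds are hypothetical stand-ins, not the paper's.
import math
import random

def simulate_once(dt=0.01, horizon=5.0, tau_req=1.0, rng=random):
    """One Euler-Maruyama run of dX = -X dt + 0.5 dW starting at x = 1.
    Target zone: |x| <= 0.5.  Safe set: |x| < 2.0 (run is 'stopped' on exit).
    Returns True iff tau_req time units accumulate in the target zone
    before the horizon while the trajectory stays inside the safe set."""
    x, t, tau = 1.0, 0.0, 0.0
    while t < horizon:
        if abs(x) >= 2.0:          # safety violation: the stopped process freezes
            return False
        if abs(x) <= 0.5:          # inside the target zone: accrue occupation time
            tau += dt
            if tau >= tau_req:
                return True
        x += -x * dt + 0.5 * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return False                   # horizon reached without enough occupation time

def estimate_probability(n=2000, seed=0):
    rng = random.Random(seed)
    return sum(simulate_once(rng=rng) for _ in range(n)) / n

if __name__ == "__main__":
    print(f"estimated satisfaction probability ~ {estimate_probability():.3f}")
```

Sampling like this gives only a statistical estimate with no guarantee; the barrier-certificate approach replaces it with provable upper and lower bounds on the same probability.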
- Shifts focus from single-event reachability to verifying cumulative time spent in target regions for tasks like surveillance and mixing.
- Introduces three classes of barrier certificates to compute rigorous upper and lower probability bounds for system safety and performance.
- Uses a 'stopped process' model and semidefinite programming for practical, computationally validated implementation.
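The flavor of a barrier-certificate bound can be sketched with a classical finite-time result (Kushner's inequality), applied to the same hypothetical 1D dynamics as above; this illustrates the certificate idea, not the paper's three certificate classes. If a nonnegative function B satisfies B = 1 on the safety boundary and its generator obeys LB <= c on the safe set, then the exit probability before time T is at most B(x0) + cT. In practice such conditions are imposed as semidefinite (sum-of-squares) constraints; here we simply check them on a grid.

```python
# Grid check of a classical Kushner-type stochastic barrier certificate for the
# hypothetical system dX = -X dt + 0.5 dW with safe set |x| < 2 and x0 = 1.
# (Sketch of the certificate idea only, not the paper's three certificate classes.)

def B(x):
    """Candidate barrier: nonnegative, equal to 1 on the boundary |x| = 2."""
    return x * x / 4.0

def generator_B(x):
    """L B(x) = f(x) B'(x) + 0.5 g(x)^2 B''(x) with f(x) = -x, g(x) = 0.5."""
    return -x * (x / 2.0) + 0.5 * 0.25 * 0.5

def check_certificate(x0=1.0, horizon=5.0, n_grid=401):
    grid = [-2.0 + 4.0 * i / (n_grid - 1) for i in range(n_grid)]
    assert all(B(x) >= 0.0 for x in grid)        # nonnegativity on the safe set
    assert abs(B(2.0) - 1.0) < 1e-12             # level 1 on the safety boundary
    c = max(generator_B(x) for x in grid)        # L B <= c across the safe set
    # Kushner: P(exit the safe set before the horizon) <= B(x0) + c * horizon
    return B(x0) + max(c, 0.0) * horizon

if __name__ == "__main__":
    print(f"certified upper bound on exit probability: {check_certificate():.4f}")
```

Here B(x0) = 0.25 and c = 1/16, so the certified exit-probability bound over horizon 5 is 0.5625. The grid check is a stand-in for what an SDP solver does: it searches for polynomial certificates whose defining inequalities hold everywhere, not just at sample points.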
Why It Matters
Provides a mathematical foundation for proving complex autonomous systems are both effective and safe over extended operations, crucial for real-world deployment.