Research & Papers

Synthesizing Safety in Infinite-Horizon Optimal Control for Disturbed High-Relative-Degree Systems via Barrier-Regulating Auxiliary Variables

New method reduces 'local trapping' by 40% and improves the safety-performance trade-off for complex control systems.

Deep Dive

A team of researchers from Shanghai Jiao Tong University and other institutions has published a novel framework for making AI-driven control systems fundamentally safer. The paper, titled 'Synthesizing Safety in Infinite-Horizon Optimal Control for Disturbed High-Relative-Degree Systems via Barrier-Regulating Auxiliary Variables,' addresses a critical flaw in the safety filters currently used for robots, drones, and autonomous vehicles. Existing methods, such as filters based on Control Barrier Functions (CBFs), can act 'myopically': when a safety command conflicts with a performance goal, the system can get stuck in a safe but suboptimal state, a failure mode known as 'local trapping.'
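The myopic failure mode is easy to reproduce with a minimal CBF quadratic-program safety filter. The following is our own toy example, not code from the paper: a 2D single integrator tries to reach a goal placed directly behind a circular obstacle, and the pointwise filter stalls the system on the obstacle boundary.

```python
# Toy example (ours, not the paper's): a CBF-QP safety filter on a 2D
# single integrator.  Safe set: outside the unit disk, h(x) = |x|^2 - 1.

def cbf_filter(x, u_nom, alpha=1.0):
    """Closed-form solution of the single-constraint CBF-QP:
    min ||u - u_nom||^2  s.t.  grad_h(x) . u + alpha * h(x) >= 0."""
    h = x[0]**2 + x[1]**2 - 1.0            # barrier function
    g = (2.0 * x[0], 2.0 * x[1])           # gradient of h
    slack = g[0]*u_nom[0] + g[1]*u_nom[1] + alpha*h
    if slack >= 0.0:                       # nominal input already safe
        return u_nom
    lam = -slack / (g[0]**2 + g[1]**2)     # minimal correction along grad h
    return (u_nom[0] + lam*g[0], u_nom[1] + lam*g[1])

goal, x, dt = (2.0, 0.0), (-2.0, 0.0), 0.05
for _ in range(2000):
    u_nom = (goal[0] - x[0], goal[1] - x[1])   # performance controller
    u = cbf_filter(x, u_nom)                   # pointwise safety filter
    x = (x[0] + dt*u[0], x[1] + dt*u[1])       # Euler integration step

# The filtered dynamics stall at (-1, 0) on the obstacle boundary:
# safe forever, but the goal at (2, 0) is never reached.
print(x)
```

On the symmetry axis the safety correction exactly cancels the goal-seeking input, so the filtered velocity shrinks to zero at the boundary: the system is safe but trapped, which is precisely the behavior the paper sets out to eliminate.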

The researchers' solution, the BRAVES framework, embeds safety as a core part of long-term (infinite-horizon) planning rather than a last-minute filter. It introduces a 'barrier-regulating auxiliary variable' to transform a constrained control problem into an unconstrained one on an extended state space. A key innovation is an 'adaptive alignment-conditioned tangential excitation' that gently nudges the system out of local traps, activated only when the safety and performance controllers are misaligned. For complex, disturbance-prone systems, they combine this with a high-order Barrier-Lyapunov Function (BLF) and use safe-exploration-enhanced online critic learning—a type of adaptive AI—to solve the control problem in real time.
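The alignment-conditioned idea can be sketched as follows (our own simplified illustration, not the paper's construction): augment a toy CBF-QP filter with a small push tangent to the barrier level set, switched on only when the barrier gradient points almost directly against the nominal controller. Because the tangent is orthogonal to the barrier gradient, adding it leaves the CBF safety condition untouched.

```python
# Sketch of an alignment-conditioned tangential excitation (our own
# illustration of the idea, not the paper's exact formulation).
import math

def cbf_filter(x, u_nom, alpha=1.0):
    """Closed-form CBF-QP filter; safe set is outside the unit disk."""
    h = x[0]**2 + x[1]**2 - 1.0
    g = (2.0 * x[0], 2.0 * x[1])
    slack = g[0]*u_nom[0] + g[1]*u_nom[1] + alpha*h
    if slack >= 0.0:
        return u_nom
    lam = -slack / (g[0]**2 + g[1]**2)
    return (u_nom[0] + lam*g[0], u_nom[1] + lam*g[1])

def tangential_excitation(x, u_nom, u, eps=0.5, align_thresh=-0.9):
    """Add a small tangential push only when the safety gradient is
    nearly anti-aligned with the performance controller."""
    g = (2.0 * x[0], 2.0 * x[1])
    ng = math.hypot(g[0], g[1])
    nn = math.hypot(u_nom[0], u_nom[1])
    align = (g[0]*u_nom[0] + g[1]*u_nom[1]) / (ng * nn + 1e-12)
    if align > align_thresh:               # controllers roughly agree
        return u
    t = (-g[1] / ng, g[0] / ng)            # unit tangent to {h = const}
    if t[0]*u_nom[0] + t[1]*u_nom[1] < 0.0:
        t = (-t[0], -t[1])                 # pick the goal-ward tangent
    return (u[0] + eps*t[0], u[1] + eps*t[1])

goal, x, dt = (2.0, 0.0), (-2.0, 0.0), 0.05
for _ in range(2000):
    u_nom = (goal[0] - x[0], goal[1] - x[1])
    u = tangential_excitation(x, u_nom, cbf_filter(x, u_nom))
    x = (x[0] + dt*u[0], x[1] + dt*u[1])

# The excitation breaks the symmetry at the trap: the state slides
# around the obstacle and reaches the goal at (2, 0).
print(x)
```

Since the push along `t` satisfies `grad_h . t = 0`, it does not change the value of the CBF constraint, so the combined input stays safe; the excitation merely breaks the symmetry that created the trap, mirroring the paper's goal of escaping local traps without sacrificing safety.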

Simulation results show the framework significantly reduces local trapping behavior, achieves a better trade-off between safety and performance metrics, and ensures reliable operation even when the system is subjected to external disturbances. This represents a shift from reactive, point-in-time safety enforcement to proactive, learned safety synthesis within the control policy itself.

Key Points
  • Mitigates 'local trapping', where AI systems get stuck in overly safe, inefficient states, by using an adaptive tangential excitation.
  • Embeds safety constraints via Barrier-Lyapunov Functions (BLFs) into long-term planning, moving beyond myopic safety filters.
  • Uses online critic learning (an adaptive AI method) to safely handle disturbances in complex, high-relative-degree systems like drones.

Why It Matters

Enables more reliable and efficient autonomous robots, drones, and industrial systems by fundamentally designing safety into their long-term AI control logic.