Neural Control: Adjoint Learning Through Equilibrium Constraints
Uses the adjoint method to compute gradients without unrolling solver iterations, enabling learning and control for multi-stable equilibrium problems.
Many physical AI tasks, such as bending a deformable linear object (DLO) to a target shape, involve implicit equilibrium: an agent actuates boundary degrees of freedom while the rest settle by minimizing total potential energy. These systems exhibit strongly nonlinear, multi-stable behavior: the same boundary conditions can yield multiple equilibrium shapes depending on the actuation trajectory. Conventional learning and control approaches are brittle because the actuation-to-configuration map is only defined implicitly, and naive backpropagation through iterative equilibrium solvers is memory- and compute-intensive.
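To make the "boundary DOFs actuated, interior DOFs settle" structure concrete, here is a minimal sketch with an illustrative toy model (a planar spring chain under gravity, not the paper's DLO energy). The endpoint positions play the role of actuated boundary DOFs; the interior nodes reach equilibrium by minimizing total potential energy. All names and parameters are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

N = 8       # number of chain nodes (toy value)
REST = 1.0  # spring rest length
K = 100.0   # spring stiffness
G = 9.81    # gravitational acceleration (unit mass per node)

def total_energy(free_xy, left, right):
    """Elastic + gravitational potential energy of the full chain."""
    pts = np.vstack([left, free_xy.reshape(-1, 2), right])
    lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    elastic = 0.5 * K * np.sum((lengths - REST) ** 2)
    gravity = G * np.sum(pts[:, 1])
    return elastic + gravity

def solve_equilibrium(left, right):
    """Settle interior nodes by minimizing energy with endpoints clamped."""
    init = np.linspace(left, right, N)[1:-1]  # straight-line initial guess
    res = minimize(total_energy, init.ravel(), args=(left, right),
                   method="L-BFGS-B")
    return np.vstack([left, res.x.reshape(-1, 2), right])

# Actuate the boundary: clamp endpoints closer than the chain's rest length,
# so the slack interior sags under gravity into an equilibrium shape.
shape = solve_equilibrium(np.array([0.0, 0.0]), np.array([5.0, 0.0]))
```

The actuation-to-configuration map is implicit here: `shape` is defined only as the argmin of the energy, which is exactly why naive backpropagation would have to differentiate through every iteration of the inner solver.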
Neural Control addresses this by deriving memory-efficient proxy gradients from the equilibrium conditions via an adjoint formulation, completely avoiding unrolling of solver iterations. These sensitivities are then integrated into a receding-horizon model predictive control (MPC) scheme that repeatedly re-anchors optimization to realized equilibria, mitigating basin-switching in multi-stable regimes. The team evaluated the framework in simulation and on physical robots manipulating DLOs, demonstrating improved performance over gradient-free baselines such as SPSA and CEM. This work opens new possibilities for robust, compute-efficient control of highly nonlinear deformable systems.
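The adjoint idea can be sketched in a few lines. Under an illustrative quadratic energy E(q, u) = ½ qᵀA q − qᵀB u (a stand-in for the paper's nonlinear DLO energy; A, B, and the tracking loss below are assumptions), equilibrium means the residual g(q, u) = ∂E/∂q = A q − B u vanishes. The implicit function theorem then gives the control gradient from one linear "adjoint" solve against the Hessian, with no dependence on how many iterations the forward solver took:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)        # SPD Hessian of the toy energy
B = rng.normal(size=(n, m))        # coupling of controls into the energy
target = rng.normal(size=n)        # desired configuration

def equilibrium(u):
    """q*(u): solve the stationarity condition A q - B u = 0."""
    return np.linalg.solve(A, B @ u)

def loss(u):
    q = equilibrium(u)
    return 0.5 * np.sum((q - target) ** 2)

def adjoint_grad(u):
    """dL/du via one adjoint solve, independent of solver iteration count."""
    q = equilibrium(u)
    lam = np.linalg.solve(A.T, q - target)  # (dg/dq)^T lam = dL/dq
    return B.T @ lam                        # dL/du = -(dg/du)^T lam, dg/du = -B

u0 = rng.normal(size=m)
g_adj = adjoint_grad(u0)

# Sanity check against central finite differences of the loss.
eps = 1e-6
g_fd = np.array([(loss(u0 + eps * e) - loss(u0 - eps * e)) / (2 * eps)
                 for e in np.eye(m)])
```

The memory savings follow directly: unrolled backpropagation stores every solver iterate, whereas the adjoint route needs only the converged equilibrium and one linear solve in the Hessian.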
- Adjoint differentiation computes proxy gradients without unrolling iterative solvers, dramatically reducing memory and compute.
- Integrated into a receding-horizon MPC that re-anchors to realized equilibria, improving robustness over long horizons.
- Outperforms gradient-free methods (SPSA, CEM) on both simulated and physical deformable linear object manipulation tasks.
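The re-anchoring behavior in a multi-stable regime can be illustrated with a scalar caricature (not the paper's controller): a tilted double-well energy E(q, u) = q⁴/4 − q²/2 − u·q, whose equilibrium depends on which basin the system currently sits in. Each control update re-solves the equilibrium from the realized state before taking an adjoint gradient step; the target, gains, and one-step horizon are all illustrative assumptions.

```python
def settle(q, u, dt=0.05, steps=2000):
    """Gradient flow on E(q,u): settles into the basin reachable from q."""
    for _ in range(steps):
        q = q - dt * (q**3 - q - u)   # residual g(q,u) = dE/dq
    return q

target = 1.2     # desired equilibrium (lies in the right-hand well)
q, u = -1.0, 0.0 # system starts settled in the *left* well

for _ in range(200):
    q = settle(q, u)              # re-anchor: observe the realized equilibrium
    hess = 3.0 * q**2 - 1.0       # dg/dq at the anchor point
    lam = (q - target) / hess     # scalar adjoint solve: (dg/dq) lam = dL/dq
    u = u - 0.5 * lam             # gradient step: dL/du = -(dg/du) lam, dg/du = -1
```

Because each step linearizes around the equilibrium the system actually reached, the controller tracks the basin switch (the left well disappears once u is tilted far enough) instead of optimizing against a stale, no-longer-reachable equilibrium.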
Why It Matters
Enables more reliable, compute-efficient robotic control of flexible objects like cables and ropes in manufacturing and automation.