Optimal Functional Incentives for Control: The Linear-Quadratic Case with Bilinear Incentives
New paper derives closed-form solutions for one-shot incentive design in long-horizon dynamic systems.
This paper tackles a fundamental control problem: how can a leader design a one-time incentive function that steers a self-interested follower to manage a dynamical system optimally over an extended horizon? Unlike adaptive incentives that update in real time, the leader here commits to a fixed functional form. The authors formalize this as a discrete-time bi-level optimal control problem and derive analytical results for the linear-quadratic (LQ) case with bilinear incentives and a myopic follower.
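The summary above does not reproduce the paper's equations, so the following is only an illustrative guess at the shape of such a setup: linear dynamics, a bilinear incentive $u^\top \Theta x$ with parameter matrix $\Theta$, and a myopic follower with a private quadratic cost weight $R_f$ (all symbols here are assumptions, not the paper's notation).

```latex
% Illustrative only: one plausible LQ form with a bilinear incentive u' \Theta x;
% this is not the paper's actual model.
\[
  x_{k+1} = A x_k + B u_k + w_k,
  \qquad
  u_k = \arg\min_{u}\; \tfrac{1}{2}\, u^\top R_f\, u \;-\; u^\top \Theta x_k
      \;=\; R_f^{-1}\, \Theta\, x_k .
\]
```

Under this toy model the induced closed loop is $x_{k+1} = (A + B R_f^{-1}\Theta)x_k + w_k$, so whether a given $\Theta$ stabilizes the system comes down to a spectral condition on $A + B R_f^{-1}\Theta$, which illustrates the flavor of condition the paper characterizes exactly.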
The key contributions include a necessary and sufficient stability condition for the closed-loop system, a closed-form expression for the gradient of the expected leader cost with respect to the incentive parameter matrix, and a fully closed-form cost expression for scalar systems. Building on these results, the authors characterize the optimal incentive in two asymptotic regimes: the infinite-horizon limit and the limit of high follower cost. Remarkably, for long horizons the optimal incentive becomes independent of the follower's private cost parameter, a result with direct implications for robust mechanism design under private information. The paper has been submitted to IEEE CDC 2026 and is available on arXiv.
- Derives a necessary and sufficient stability condition for the induced closed-loop system with bilinear incentives.
- Provides a closed-form gradient of the leader cost with respect to the incentive parameter matrix, enabling efficient gradient-based optimization (see the numerical sketch after this list).
- For long horizons, the optimal incentive becomes independent of the follower's private cost, enabling robust design without knowledge of the follower's preferences.
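To make the gradient-based tuning concrete, here is a minimal numerical sketch under the toy model above. It uses a finite-difference gradient in place of the paper's closed-form expression, and all matrices and the `leader_cost` / `optimize_incentive` helpers are hypothetical, not the authors' code.

```python
import numpy as np

def leader_cost(theta, A, B, Q_l, R_l, R_f, x0, horizon=200):
    """Leader's cost when a myopic follower responds with u_k = R_f^{-1} theta x_k."""
    K = np.linalg.solve(R_f, theta)      # follower's induced feedback gain
    A_cl = A + B @ K                     # closed-loop dynamics matrix
    if np.max(np.abs(np.linalg.eigvals(A_cl))) >= 1.0:
        return np.inf                    # destabilizing incentive: reject
    x, cost = x0.astype(float).copy(), 0.0
    for _ in range(horizon):
        u = K @ x
        cost += x @ Q_l @ x + u @ R_l @ u
        x = A_cl @ x
    return cost

def optimize_incentive(theta0, A, B, Q_l, R_l, R_f, x0, steps=2000, lr=1e-2, eps=1e-5):
    """Finite-difference gradient descent on the incentive parameter matrix theta."""
    theta = theta0.astype(float).copy()
    for _ in range(steps):
        base = leader_cost(theta, A, B, Q_l, R_l, R_f, x0)
        grad = np.zeros_like(theta)
        for i in range(theta.shape[0]):
            for j in range(theta.shape[1]):
                pert = theta.copy()
                pert[i, j] += eps
                grad[i, j] = (leader_cost(pert, A, B, Q_l, R_l, R_f, x0) - base) / eps
        theta = theta - lr * grad
    return theta

if __name__ == "__main__":
    A = np.array([[1.05]])               # scalar, slightly unstable open-loop system
    B = np.array([[1.0]])
    Q_l = np.array([[1.0]])              # leader's state-cost weight
    R_l = np.array([[0.1]])              # leader's control-cost weight
    R_f = np.array([[2.0]])              # follower's private cost parameter
    theta0 = np.array([[-1.0]])          # stabilizing initial guess
    theta = optimize_incentive(theta0, A, B, Q_l, R_l, R_f, x0=np.array([1.0]))
    print("optimized incentive parameter:", theta)
```

In the scalar case the paper reportedly gives the cost in fully closed form, so a numerical loop like this would be unnecessary there; it stands in for the general matrix case, where the paper's closed-form gradient would replace the finite differences.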
Why It Matters
Enables robust, fixed-incentive contracts for autonomous systems without requiring real-time incentive updates or access to the follower's private information.