Target Mirror Descent: A Unifying Framework for Solving Monotone Variational Inequalities
A new framework stabilizes monotone flows and unifies extragradient, proximal point, and more.
Researchers Yu-Wen Chen, Can Kizilkale, and Murat Arcak have introduced Target Mirror Descent (TMD), a framework that unifies several landmark algorithms for solving monotone variational inequalities. TMD stabilizes mirror descent, which can cycle or diverge on monotone problems such as zero-sum games, by adding a target-point correction mechanism to the dual update. This adjustment lets TMD recover the proximal point algorithm, extragradient methods, splitting methods, Brown-von Neumann-Nash dynamics, forward-backward-forward dynamics, and discounted mirror descent as special cases. The framework also corrects an equilibrium misalignment in discounted mirror descent and generalizes its higher-order extension beyond interior solutions.
A key innovation in TMD is the explicit decoupling of the mirror map from the target determination. This enables geometric ensembles, where multiple algorithms solve the same problem in parallel using distinct mirror maps while sharing a common dual update. The authors prove that such an ensemble reduces to a single TMD with a synthesized mirror map, inheriting its convergence guarantees. This work provides a unified perspective on optimization algorithms and offers practical tools for improving convergence in machine learning and game theory applications.
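To see why a target-point correction helps, consider the bilinear saddle problem min_x max_y xy, whose monotone operator F(x, y) = (y, -x) makes plain mirror descent with the Euclidean mirror map spiral away from the equilibrium. The sketch below is illustrative only: it assumes an extragradient-style choice of target (one of the special cases the article says TMD recovers), and its update rules and variable names are not taken from the paper.

```python
import numpy as np

# Monotone operator of the bilinear saddle problem min_x max_y x*y:
# F(x, y) = (y, -x). Its flow rotates around the equilibrium (0, 0),
# so a plain descent step (Euclidean mirror map) spirals outward.
def F(z):
    x, y = z
    return np.array([y, -x])

eta = 0.1
z_plain = np.array([1.0, 1.0])  # plain mirror descent iterate
z_tmd = np.array([1.0, 1.0])    # target-corrected iterate (hypothetical sketch)

for _ in range(2000):
    # Plain update: move along F evaluated at the current point.
    z_plain = z_plain - eta * F(z_plain)

    # Target-corrected update: first form a look-ahead "target point",
    # then move along F evaluated at that target. With a Euclidean mirror
    # map, this particular choice of target reduces to extragradient.
    target = z_tmd - eta * F(z_tmd)
    z_tmd = z_tmd - eta * F(target)

# The plain iterates drift far from the equilibrium, while the
# target-corrected iterates contract toward it.
```

Running this, the norm of the plain iterate grows by orders of magnitude while the target-corrected iterate shrinks toward zero, which is the stabilization effect the correction mechanism is meant to provide.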
- TMD unifies 6+ landmark algorithms (proximal point, extragradient, splitting methods, BNND, FBF, discounted mirror descent) under one framework
- It stabilizes monotone flows via a target point correction mechanism in the dual update
- Geometric ensembles allow parallel solving with distinct mirror maps, reducing to a single TMD with synthesized guarantees
Why It Matters
TMD provides a unified theoretical foundation for a broad family of equilibrium-seeking algorithms, offering practical tools for more stable, efficient training of machine learning models and computation of game-theoretic equilibria.