An Efficient Black-Box Reduction from Online Learning to Multicalibration, and a New Route to $\Phi$-Regret Minimization
New paper bridges online learning and multicalibration with a simple reduction.
Gabriele Farina and Juan Carlos Perdomo's new paper presents a Gordon-Greenwald-Marks (GGM)-style black-box reduction that transforms any online learning algorithm into an online multicalibration algorithm. The key insight: combining any no-regret learner over a function class $H$ with an expected variational inequality (EVI) solver yields high-dimensional multicalibration guarantees. This elegantly resolves the main open question from Garg, Jung, Reingold, and Roth (SODA '24), proving that oracle-efficient online multicalibration with $\sqrt{T}$-type guarantees is possible in full generality. The reduction also unifies existing multicalibration algorithms, enables robust performance in challenging settings such as delayed observations or censored outcomes, and delivers the first efficient black-box reduction between online learning and multiclass omniprediction.
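For intuition, here is one common way to state the target guarantee; the notation is illustrative, not necessarily the paper's. Given contexts $x_t$, predictions $p_t$, and outcomes $y_t$ (scalar here for simplicity; the paper handles high-dimensional outcomes), the multicalibration error with respect to a class $H$ of test functions is

$$\mathrm{MCal}_H(T) \;=\; \max_{h \in H} \left| \sum_{t=1}^{T} h(x_t, p_t)\,(y_t - p_t) \right|,$$

and a $\sqrt{T}$-type guarantee means $\mathrm{MCal}_H(T) = O(\sqrt{T})$ up to factors depending on $H$, so the miscalibration correlated with any test in $H$ vanishes at rate $O(1/\sqrt{T})$ on average.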
The second major result establishes a fine-grained reduction from high-dimensional online multicalibration to (contextual) $\Phi$-regret minimization. This creates a new pathway from external regret to $\Phi$-regret that bypasses the complex fixed-point or semi-separation machinery required by prior work, dramatically simplifying a result from Daskalakis, Farina, Fishelson, Pipis, and Schneider (STOC '25) while improving convergence rates. The approach yields algorithms that compete against richer deviation classes, including those defined by any reproducing kernel Hilbert space (RKHS). For practitioners, this means more efficient and theoretically grounded methods for ensuring fairness and calibration in online learning systems, with direct applications in adaptive decision-making, personalized recommendations, and other settings requiring robust predictions under distribution shift.
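For reference, $\Phi$-regret is the standard generalization of external regret to an arbitrary class $\Phi$ of strategy modifications (this definition is standard, not specific to the paper):

$$\mathrm{Reg}_\Phi(T) \;=\; \max_{\phi \in \Phi} \sum_{t=1}^{T} \Big( u_t\big(\phi(x_t)\big) - u_t(x_t) \Big),$$

where $x_t$ is the action played at round $t$ and $u_t$ is the round-$t$ utility. Constant maps recover external regret; taking all self-maps of the action set gives swap regret, whose minimization underlies convergence to correlated equilibria.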
- Resolves open question from Garg et al. (SODA '24) on oracle-efficient online multicalibration with $\sqrt{T}$-type guarantees.
- Provides first efficient black-box reduction between online learning and multiclass omniprediction.
- Simplifies and improves rates over Daskalakis et al. (STOC '25) by bypassing fixed-point machinery, enabling RKHS-based deviation classes; the classical fixed-point route it replaces is sketched below.
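To make "fixed-point machinery" concrete, here is a minimal sketch of the classical route that the new reduction bypasses: the Gordon-Greenwald-Marks template, instantiated for finite actions and swap deviations (essentially the Blum-Mansour construction). Each round, a family of no-regret learners proposes a stochastic matrix, and the algorithm must play a fixed point of it, i.e., a stationary distribution. The `loss_fn` interface is hypothetical, and this illustrates the prior approach, not the paper's method.

```python
import numpy as np

def swap_regret_fixed_point(loss_fn, n_actions, T, eta=0.1):
    """Classical fixed-point route to swap regret (GGM template,
    Blum-Mansour instantiation). One Hedge learner per action maintains
    a row of a stochastic matrix Q_t; the play distribution p_t must
    solve p_t = p_t @ Q_t -- the per-round fixed-point computation.
    """
    # eta is a fixed step size for the sketch; theory suggests ~ sqrt(log(n)/T).
    weights = np.ones((n_actions, n_actions))  # row i = learner for action i
    for t in range(T):
        Q = weights / weights.sum(axis=1, keepdims=True)  # row-stochastic
        # Fixed-point step: stationary distribution of the Markov chain Q.
        eigvals, eigvecs = np.linalg.eig(Q.T)
        v = eigvecs[:, np.argmax(eigvals.real)]  # Perron eigenvector (eigenvalue 1)
        p = np.abs(v.real) / np.abs(v.real).sum()
        losses = loss_fn(t)  # hypothetical adversary: per-action loss vector
        # Blum-Mansour credit split: learner i sees losses scaled by p[i].
        weights *= np.exp(-eta * np.outer(p, losses))
    return p  # final play distribution (illustrative return value)
```

The per-round eigenvector solve is exactly the fixed-point step in question; the new reduction instead reaches $\Phi$-regret through multicalibration, avoiding that computation and extending to richer deviation classes such as those defined by an RKHS.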
Why It Matters
Simplifies the path to fairness and calibration guarantees in online learning, enabling robust algorithms for delayed or censored data.