Functional Central Limit Theorem for Stochastic Gradient Descent
New functional limit theorem models SGD's entire optimization path, not just its final iterate, enabling sharper analysis of training dynamics.
Researchers Kessang Flamand and Victor-Emmanuel Brunel published "Functional Central Limit Theorem for Stochastic Gradient Descent" (arXiv:2602.15538). Their work proves that SGD trajectories, when properly rescaled, converge in distribution to diffusion processes. This captures the temporal fluctuations of the iterates around a minimizer, unlike classical CLTs, which describe only the distribution of the final iterate. The theorem covers non-smooth convex objectives, such as those arising in robust location estimation, providing a complete statistical description of SGD's dynamic behavior throughout optimization.
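The flavor of result can be illustrated with a simulation. The sketch below is not the paper's construction; it is a minimal, assumed setup: SGD with subgradient updates for robust location estimation (minimizing E|X − θ|, whose minimizer is the median), with the fluctuations around the minimizer rescaled by the square root of the inverse step size, a common normalization under which the rescaled path behaves like an Ornstein-Uhlenbeck-type diffusion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Robust location estimation: minimize E|X - theta| via SGD.
# For X ~ N(0, 1) the minimizer theta* is the median, i.e. 0.
n = 100_000
theta = 5.0                # deliberately poor starting point
path = np.empty(n)
for t in range(n):
    x = rng.standard_normal()
    gamma = 1.0 / np.sqrt(t + 1)      # step size gamma_t ~ t^{-1/2}
    # a subgradient of |theta - x| is sign(theta - x)
    theta -= gamma * np.sign(theta - x)
    path[t] = theta

# Rescale the late-stage fluctuations around theta* = 0 by gamma_t^{-1/2}.
# A functional CLT concerns the limit of such rescaled paths as a whole
# process (a diffusion), not just the law of the final iterate.
steps = np.arange(n // 2, n) + 1
scaled = (steps ** 0.25) * path[n // 2:]   # gamma_t^{-1/2} = t^{1/4}

print(f"final iterate:        {path[-1]:.4f}")
print(f"scaled-path std dev:  {scaled.std():.3f}")
```

The scaled path stays order-one in magnitude while the raw iterates shrink toward the median, which is the kind of process-level behavior a functional limit theorem makes precise.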
Why It Matters
Enables more precise theoretical analysis of training dynamics and principled uncertainty quantification for models trained with SGD in practice.