Decentralized Proximal Stochastic Gradient Langevin Dynamics
New algorithm handles constraints via Moreau-Yosida envelope with provable convergence.
Mohammad Rafiqul Islam and Lingjiong Zhu have introduced DE-PSGLD, a decentralized Markov chain Monte Carlo algorithm for sampling from log-concave probability distributions supported on convex domains. Traditional MCMC methods struggle with constraints in distributed settings, requiring centralized coordination or expensive projection steps. DE-PSGLD overcomes this by employing a shared proximal regularization derived from the Moreau-Yosida envelope, which effectively transforms the constrained sampling problem into an unconstrained one while remaining consistent with the target posterior. This makes it suitable for decentralized networks of agents that collaborate without a central server.
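To make the envelope trick concrete, here is the standard Moreau-Yosida construction the paragraph alludes to (standard definitions, not notation taken from the paper): smoothing the indicator function of the constraint set replaces a hard constraint with a smooth quadratic penalty whose gradient only needs a projection.

```latex
% Moreau-Yosida envelope of a convex function f with parameter \lambda > 0:
M_{\lambda} f(x) \;=\; \min_{y} \Big\{ f(y) + \tfrac{1}{2\lambda}\,\|x - y\|^{2} \Big\}.

% For the indicator \iota_{C} of a closed convex set C, the envelope becomes
M_{\lambda}\,\iota_{C}(x) \;=\; \tfrac{1}{2\lambda}\,\mathrm{dist}(x, C)^{2},
\qquad
\nabla M_{\lambda}\,\iota_{C}(x) \;=\; \tfrac{1}{\lambda}\big(x - \Pi_{C}(x)\big),
```

where $\Pi_{C}$ is the Euclidean projection onto $C$. The envelope is differentiable with a $\lambda^{-1}$-Lipschitz gradient, so standard unconstrained Langevin analysis applies, at the cost of the $\lambda$-dependent bias the authors quantify.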
The paper establishes non-asymptotic convergence guarantees in the 2-Wasserstein distance, covering both individual agent iterates and their network averages. The analysis quantifies the bias introduced by the proximal approximation and shows that DE-PSGLD converges to a regularized Gibbs distribution. Tested on synthetic and real datasets, the algorithm demonstrates fast posterior concentration and high predictive accuracy. As the first decentralized approach specifically designed for constrained domains, DE-PSGLD opens new possibilities for privacy-preserving Bayesian inference and distributed learning problems where constraints are critical, such as resource allocation, robot coordination, and sensor networks.
- Uses Moreau-Yosida envelope for proximal regularization to enforce convex constraints.
- Provides non-asymptotic convergence guarantees in 2-Wasserstein distance for individual and averaged iterates.
- First decentralized MCMC algorithm for constrained domains, tested on synthetic and real datasets.
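The mechanics summarized above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the mixing matrix `W`, the per-agent stochastic gradients `grad_fns`, the projection `project`, and the parameters `lam` and `step` are all illustrative placeholders.

```python
import numpy as np

def de_psgld_step(X, W, grad_fns, project, lam, step, rng):
    """One illustrative decentralized proximal SGLD round.

    X        : (n_agents, d) array of current agent iterates
    W        : (n_agents, n_agents) doubly stochastic gossip matrix
    grad_fns : list of per-agent stochastic gradients of local potentials
    project  : Euclidean projection onto the convex constraint set C
    lam      : Moreau-Yosida smoothing parameter (bias-smoothness trade-off)
    step     : Langevin step size
    """
    n, d = X.shape
    mixed = W @ X  # gossip step: average with network neighbors
    out = np.empty_like(X)
    for i in range(n):
        x = mixed[i]
        # Gradient of the Moreau-Yosida regularized potential:
        # local gradient plus the penalty gradient (x - proj_C(x)) / lam.
        g = grad_fns[i](x) + (x - project(x)) / lam
        # Langevin update: gradient descent plus injected Gaussian noise.
        out[i] = x - step * g + np.sqrt(2.0 * step) * rng.standard_normal(d)
    return out
```

For example, three fully connected agents sampling a standard Gaussian restricted to the unit ball would use `W = np.full((3, 3), 1/3)`, `grad_fns = [lambda x: x] * 3`, and `project = lambda x: x / max(1.0, np.linalg.norm(x))`, iterating `de_psgld_step` until the chains mix.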
Why It Matters
Enables privacy-preserving distributed Bayesian inference with provable convergence on constrained probability spaces.