Research & Papers

Robust Stochastic Gradient Posterior Sampling with Lattice Based Discretisation

A new 'Stochastic Gradient Lattice Random Walk' method stabilizes Bayesian training where current techniques fail, remaining robust even under heavy-tailed gradient noise.

Deep Dive

A team of researchers including Max Welling and Miranda Cheng proposes Stochastic Gradient Lattice Random Walk (SGLRW), a new Bayesian sampling algorithm. It extends the Lattice Random Walk discretization of Langevin dynamics to the stochastic gradient setting, making it more robust to minibatch size and gradient noise than standard Stochastic Gradient Langevin Dynamics (SGLD). Validated on regression and classification tasks, SGLRW remains stable in regimes where SGLD fails, improving the reliability of scalable, uncertainty-aware model training.
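To see the flavor of the approach, here is a minimal, illustrative sketch in Python of a lattice random walk discretization of Langevin dynamics driven by stochastic gradients, run side by side with SGLD on a toy 1D Gaussian model. The jump-probability formula, the clipping into [0, 0.5], and all parameter values are assumptions made for illustration, not the authors' exact SGLRW update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: data x_i ~ N(theta, 1) with a flat prior, so the posterior
# over theta is approximately N(mean(data), 1/n).
data = rng.normal(loc=1.0, scale=1.0, size=100)
n = len(data)

def stochastic_grad(theta, batch_size=10):
    """Unbiased minibatch estimate of grad log p(theta | data)."""
    batch = rng.choice(data, size=batch_size, replace=False)
    return n * np.mean(batch - theta)

def sgld_step(theta, grad, h):
    """Standard SGLD: Euler-Maruyama step of dtheta = (1/2)*grad dt + dW."""
    return theta + 0.5 * h * grad + np.sqrt(h) * rng.standard_normal()

def lattice_rw_step(theta, grad, h, a):
    """One lattice random walk step (an illustrative assumption, not the
    paper's exact rule): jump +/-a, or stay, on a grid of spacing a, with
    probabilities matching the Langevin drift (h/2)*grad and diffusion h
    to leading order. Clipping the probabilities into [0, 0.5] bounds the
    effect of any single extreme gradient estimate."""
    base = h / (2.0 * a**2)               # matches diffusion: Var[step] ~ h
    tilt = 0.5 * a * grad                 # matches drift: E[step] ~ (h/2)*grad
    p_plus = float(np.clip(base * (1.0 + tilt), 0.0, 0.5))
    p_minus = float(np.clip(base * (1.0 - tilt), 0.0, 0.5))
    u = rng.random()
    if u < p_plus:
        return theta + a
    if u < p_plus + p_minus:
        return theta - a
    return theta                          # stay on the current lattice site

h, a, steps, burn_in = 1e-4, 0.02, 20000, 5000
theta_sgld = theta_lrw = 0.0
trace_sgld, trace_lrw = [], []
for _ in range(steps):
    theta_sgld = sgld_step(theta_sgld, stochastic_grad(theta_sgld), h)
    theta_lrw = lattice_rw_step(theta_lrw, stochastic_grad(theta_lrw), h, a)
    trace_sgld.append(theta_sgld)
    trace_lrw.append(theta_lrw)

print("true posterior mean ~", data.mean())
print("SGLD estimate:       ", np.mean(trace_sgld[burn_in:]))
print("lattice RW estimate: ", np.mean(trace_lrw[burn_in:]))
```

Because each move is at most one lattice site of size a, an extreme minibatch gradient can only saturate the walk's drift rather than produce an arbitrarily large parameter jump, which loosely illustrates why a lattice-based discretization can tolerate heavy-tailed gradient noise where SGLD's unbounded Gaussian step can blow up.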

Why It Matters

Enables more stable and reliable training of large-scale Bayesian neural networks, crucial for AI applications requiring robust uncertainty estimates.