Variational Smoothing and Inference for SDEs from Sparse Data with Dynamic Neural Flows
Dynamic neural flows outperform MCMC while needing 100x fewer observations
In a recent arXiv preprint, Wang and Ganguly present a variational smoothing and inference framework for stochastic differential equations (SDEs) that learns a neural conditional score to handle sparse, noisy observations. Classical methods such as Markov chain Monte Carlo (MCMC) suffer from path degeneracy and scale poorly when data are scarce. The new approach characterizes the posterior SDE through a conditional backward-in-time score: the gradient of a function that solves a Kolmogorov backward equation between observations and receives multiplicative updates at observation times.
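To make this concrete, here is a minimal sketch in standard Doob h-transform notation; the symbols ($f$, $\sigma$, $g_k$, $h$) are our illustrative choices and not necessarily the paper's. For a prior model $\mathrm{d}X_t = f(X_t,t)\,\mathrm{d}t + \sigma(X_t,t)\,\mathrm{d}W_t$ with observations $y_k$ at times $t_k$ and likelihoods $g_k(y_k \mid x)$, the posterior SDE keeps the diffusion and corrects the drift by the conditional score:

```latex
% Posterior (smoothing) SDE: same diffusion, score-corrected drift.
\[
  \mathrm{d}X_t = \Big[\, f(X_t,t) + \sigma\sigma^{\top}(X_t,t)\,
      \nabla_x \log h(X_t,t) \,\Big]\,\mathrm{d}t
      + \sigma(X_t,t)\,\mathrm{d}W_t .
\]
% Between observations, h solves the Kolmogorov backward equation:
\[
  \partial_t h + f^{\top}\nabla_x h
    + \tfrac{1}{2}\operatorname{Tr}\!\big(\sigma\sigma^{\top}\nabla_x^2 h\big) = 0 ,
\]
% with a multiplicative update at each observation time t_k:
\[
  h(x, t_k^{-}) = g_k(y_k \mid x)\, h(x, t_k^{+}) .
\]
```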
A neural network is trained to satisfy both the governing PDE and the observation-induced jump conditions, integrating continuous-time dynamics with discrete Bayesian updates. This yields a posterior SDE with the same diffusion coefficient as the prior but a modified drift, enabling efficient trajectory sampling. The method also derives an evidence lower bound (ELBO) for joint state smoothing and parameter estimation, implemented as a variational EM-style procedure. Experiments on nonlinear systems show accurate inference from very few observations, dramatically outperforming MCMC in scalability and stability.
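Training can be read as physics-informed regression: penalize the backward-PDE residual at interior collocation points and the observation condition at observation times. The PyTorch sketch below illustrates this for a one-dimensional toy model with a single terminal observation; the network, drift `f`, noise scale `sigma`, Gaussian likelihood, and collocation scheme are all illustrative assumptions, not the paper's exact setup.

```python
import torch

log_h = torch.nn.Sequential(               # approximates log h(x, t)
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def f(x):                                   # assumed prior drift (OU pull to 0)
    return -x

sigma, T = 0.5, 1.0                         # assumed diffusion and horizon
y, obs_std = 0.8, 0.1                       # assumed observation at time T

opt = torch.optim.Adam(log_h.parameters(), lr=1e-3)
for step in range(5000):
    # Interior collocation: enforce the backward PDE for u = log h,
    # which in Hopf-Cole form reads
    #   u_t + f u_x + 0.5 sigma^2 (u_xx + u_x^2) = 0 .
    x = (torch.randn(256, 1) * 2.0).requires_grad_(True)
    t = (torch.rand(256, 1) * T).requires_grad_(True)
    u = log_h(torch.cat([x, t], dim=-1))
    u_x, u_t = torch.autograd.grad(u.sum(), (x, t), create_graph=True)
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    pde = u_t + f(x) * u_x + 0.5 * sigma**2 * (u_xx + u_x**2)

    # Terminal condition: log h(x, T) = log g(y | x) up to a constant.
    # Interior observations would add analogous multiplicative-jump
    # penalties, log h(x, t_k^-) = log g_k(y_k | x) + log h(x, t_k^+).
    xT = torch.randn(256, 1) * 2.0
    uT = log_h(torch.cat([xT, torch.full_like(xT, T)], dim=-1))
    target = -0.5 * ((y - xT) / obs_std) ** 2
    loss = pde.pow(2).mean() + (uT - target).pow(2).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```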
- Learns neural conditional score from Kolmogorov backward equation and observation jump conditions
- Achieves accurate latent trajectory inference with 100x fewer observations than classical MCMC (a sampling sketch follows this list)
- Provides a variational EM procedure with an ELBO for joint smoothing and parameter estimation
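Given a trained score network, smoothing reduces to simulating the posterior SDE with the score-corrected drift, for example by Euler-Maruyama. This sketch reuses the hypothetical `log_h`, `f`, `sigma`, and `T` from the training example above and is, again, an illustration under those assumptions rather than the paper's implementation.

```python
import torch

def score(x, t):
    """Conditional score d/dx log h(x, t) from the trained network."""
    x = x.clone().requires_grad_(True)
    u = log_h(torch.cat([x, torch.full_like(x, t)], dim=-1))
    return torch.autograd.grad(u.sum(), x)[0]

def sample_posterior_paths(n_paths=64, n_steps=200, x0=0.0):
    """Euler-Maruyama for dX = [f(X) + sigma^2 * score(X, t)] dt + sigma dW."""
    dt = T / n_steps
    x = torch.full((n_paths, 1), x0)
    path = [x]
    for i in range(n_steps):
        drift = f(x) + sigma**2 * score(x, i * dt)
        x = x + drift * dt + sigma * dt**0.5 * torch.randn_like(x)
        path.append(x)
    return torch.stack(path)            # shape: (n_steps + 1, n_paths, 1)

paths = sample_posterior_paths()        # posterior trajectories given y
```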
Why It Matters
Enables reliable AI modeling of real-world systems where data is sparse and noisy, such as climate and finance.