Sampling on Discrete Spaces with Temporal Point Processes
A neuroscience-inspired AI sampler outperforms traditional methods in 63 distribution tests, offering faster convergence.
Researchers from University College London's Gatsby Computational Neuroscience Unit have introduced a novel framework for sampling from complex discrete distributions. The method, developed by Cameron Stewart and Maneesh Sahani, constructs a multivariate temporal point process—modeled as a system of coupled infinite-server queues—whose event counts converge to a target distribution over time. This approach introduces a form of 'discrete momentum' that suppresses inefficient random-walk behavior common in other samplers, and it supports both reversible and non-reversible dynamics for greater flexibility.
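To build intuition for the queueing framing: in an infinite-server (M/M/∞) queue, arrivals occur at a constant rate and every occupant departs independently at a fixed rate, so the occupancy count converges to a Poisson stationary distribution with mean λ/μ. The sketch below is not the authors' sampler, only a minimal Gillespie-style simulation of this single-queue principle; the function name and parameters are illustrative.

```python
import random

def simulate_mm_inf(lam=3.0, mu=1.0, t_end=2000.0, seed=0):
    """Gillespie simulation of a single M/M/inf queue.

    Arrivals occur at rate lam; each of the n current occupants
    departs independently at rate mu. Returns the time-averaged
    occupancy, which converges to lam/mu, the mean of the
    Poisson(lam/mu) stationary distribution.
    """
    rng = random.Random(seed)
    t, n = 0.0, 0
    area = 0.0  # integral of n(t) dt, for the time average
    while t < t_end:
        rate = lam + n * mu                # total event rate
        dt = rng.expovariate(rate)         # time to next event
        area += n * min(dt, t_end - t)     # occupancy is constant between events
        t += dt
        if t >= t_end:
            break
        if rng.random() < lam / rate:      # arrival ("birth" event)
            n += 1
        else:                              # departure of one occupant ("death")
            n -= 1
    return area / t_end

print(simulate_mm_inf())  # close to lam / mu = 3.0
```

The paper's construction couples many such queues so that the joint event counts target an arbitrary discrete distribution, rather than a product of independent Poissons as here.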
Building on this framework, the team derived a recurrent stochastic neural network whose dynamics implement the sampling computation, exhibiting biologically plausible features such as refractory periods and oscillations. Benchmarked across 63 target distributions, the sampler consistently outperformed standard birth-death processes and frequently surpassed the more advanced Zanella processes in multivariate effective sample size. These efficiency gains were even more pronounced when performance was normalized by CPU time, underscoring the method's computational advantage.
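Effective sample size (ESS), the benchmark metric above, measures how many independent draws a correlated chain is worth: ESS = n / τ, where τ is the integrated autocorrelation time. The sketch below is a standard univariate estimator using Geyer-style initial-positive-sequence truncation, not the paper's multivariate variant; the function name is illustrative.

```python
import numpy as np

def effective_sample_size(x):
    """Estimate the ESS of a 1-D chain: n / tau, where
    tau = 1 + 2 * sum of autocorrelations, summed in consecutive
    pairs and truncated at the first non-positive pair."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    # acf[t] = empirical autocorrelation at lag t (acf[0] == 1)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    tau = 1.0
    for k in range(1, n // 2):
        pair = acf[2 * k - 1] + acf[2 * k]
        if pair <= 0:      # truncate: later pairs are dominated by noise
            break
        tau += 2.0 * pair
    return n / tau
```

For an i.i.d. chain this returns roughly n; for a strongly autocorrelated chain, such as a random walk, it returns far fewer effective draws, which is exactly the inefficiency the paper's 'discrete momentum' is designed to suppress.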
- Outperformed standard birth-death processes across all 63 target distributions, and frequently surpassed Zanella processes, in multivariate effective sample size.
- Frames sampling as coupled infinite-server queues, introducing 'discrete momentum' to suppress random walks.
- Enables a recurrent neural network implementation with biologically plausible features like oscillatory dynamics.
Why It Matters
Provides a faster, more efficient engine for probabilistic AI models, crucial for applications in drug discovery and complex system simulation.