Pliable rejection sampling
New PRS method guarantees acceptance rates while keeping outputs i.i.d.
Researchers Akram Erraqabi, Michal Valko, Alexandra Carpentier, and Odalric-Ambrym Maillard have introduced pliable rejection sampling (PRS), a new technique for sampling from difficult probability distributions. Published on arXiv in 2016 and presented at ICML 2016, PRS addresses the core limitation of traditional rejection sampling: high rejection rates that make it inefficient for many real-world applications. The key innovation is using a kernel estimator to learn the sampling proposal adaptively, rather than relying on a fixed or hand-crafted proposal.
PRS maintains the theoretical guarantees of rejection sampling, producing samples that are with high probability independent and identically distributed (i.i.d.) according to the target distribution f. Crucially, it provides a performance guarantee on the number of accepted samples, a feature missing from many adaptive methods that either work only for specific distributions or lack such assurances. This makes PRS suitable for machine learning tasks like Bayesian inference, generative modeling, and Monte Carlo simulation where efficient sampling is critical. The method builds on kernel density estimation to dynamically adjust the proposal, improving acceptance rates without sacrificing sample quality.
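To make the idea concrete, here is a minimal Python sketch of the two-phase pattern the paper describes: a pilot rejection-sampling run with a crude proposal, followed by a second run whose proposal is a kernel density estimate fit to the pilot samples. This is not the authors' implementation; the function names (`prs_sketch`), the 1-D target on [0, 1], the bandwidth, the uniform mixing weight `eps`, and the envelope constant `M1` are all illustrative assumptions, whereas PRS derives a provably valid envelope from the kernel estimator's error bound.

```python
import numpy as np

def rejection_sample(target_pdf, draw, proposal_pdf, M, n, rng):
    """Classic rejection sampling: draw x from the proposal g, accept with
    probability f(x) / (M * g(x)). If M * g >= f everywhere, the accepted
    samples are i.i.d. draws from f."""
    out = []
    while len(out) < n:
        x = draw(rng)
        if rng.uniform() < target_pdf(x) / (M * proposal_pdf(x)):
            out.append(x)
    return np.array(out)

def prs_sketch(target_pdf, n, rng, M0=2.0, pilot=300, bandwidth=0.1, eps=0.2):
    """Two-phase sketch of kernel-adapted rejection sampling for a 1-D
    target supported on [0, 1]. All constants here are illustrative."""
    # Phase 1: uniform proposal on [0, 1]; M0 must satisfy M0 >= max f.
    uniform_pdf = lambda x: 1.0
    pilot_samples = rejection_sample(
        target_pdf, lambda r: r.uniform(), uniform_pdf, M0, pilot, rng)

    # Phase 2: Gaussian kernel density estimate of f from the pilot run,
    # mixed with the uniform proposal so the density stays positive.
    def kde_pdf(x):
        z = (x - pilot_samples) / bandwidth
        return np.mean(np.exp(-0.5 * z * z)) / (bandwidth * np.sqrt(2.0 * np.pi))

    mix_pdf = lambda x: (1.0 - eps) * kde_pdf(x) + eps * uniform_pdf(x)

    def mix_draw(r):
        # Sampling from the mixture: uniform with prob eps, otherwise a
        # kernel centered on a random pilot sample.
        if r.uniform() < eps:
            return r.uniform()
        return r.normal(r.choice(pilot_samples), bandwidth)

    # Because the adapted proposal is close to f, a much smaller envelope
    # constant suffices; PRS computes a valid one from estimation error.
    M1 = 2.0
    return rejection_sample(target_pdf, mix_draw, mix_pdf, M1, n, rng)

# Demo on a Beta(2, 2) target, f(x) = 6x(1 - x) on [0, 1] (max f = 1.5).
target = lambda x: 6.0 * x * (1.0 - x) if 0.0 <= x <= 1.0 else 0.0
rng = np.random.default_rng(0)
samples = prs_sketch(target, 500, rng, M0=2.0)
```

Draws from the Gaussian kernels can land outside [0, 1], but the target density is zero there, so such draws are simply rejected; the adapted proposal mostly concentrates effort where f has mass, which is the source of the improved acceptance rate.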
- PRS uses a kernel estimator to learn the sampling proposal adaptively, boosting acceptance rates over traditional rejection sampling
- Samples are with high probability i.i.d. and distributed according to target distribution f, with a guarantee on accepted samples
- Presented at ICML 2016, the method works for general distributions without the restrictions of common adaptive methods
Why It Matters
PRS makes rejection sampling practical and efficient for machine learning tasks like Bayesian inference and generative modeling.