Adaptive Querying with AI Persona Priors
New method uses AI personas to ask fewer questions while learning user preferences accurately.
A new ICML 2026 paper by Kaizheng Wang, Yuhang Wu, and Assaf Zeevi tackles the challenge of adaptive querying—sequentially selecting questions to efficiently learn user-specific quantities like responses to held-out items or psychometric indicators—within tight question budgets. Classical approaches such as Bayesian experimental design and computerized adaptive testing often rely on restrictive parametric assumptions (e.g., normal priors) or computationally expensive posterior approximations (e.g., MCMC), making them impractical for heterogeneous, high-dimensional, and cold-start settings.
The authors propose a persona-induced latent variable model that represents a user's state through membership in a finite dictionary of AI personas. Each persona comes with response distributions generated by a large language model, yielding expressive priors. Crucially, this structure admits closed-form posterior updates and efficient finite-mixture predictions, which makes scalable Bayesian design for sequential item selection practical. The method is evaluated on synthetic data and WorldValuesBench, showing that persona-based posteriors deliver accurate probabilistic predictions and an interpretable adaptive elicitation pipeline—all while avoiding expensive computation.
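To make the closed-form claim concrete, here is a minimal sketch of a persona-mixture update and prediction. All names and shapes (`persona_probs`, `K` personas, `M` items, `C` answer choices) are hypothetical placeholders: the paper elicits per-persona response distributions from an LLM, while this sketch fills them with random Dirichlet draws purely for illustration. With a finite persona dictionary, Bayes' rule reduces to reweighting a length-K vector, so no MCMC or variational approximation is needed.

```python
import numpy as np

# Hypothetical setup: K personas, M items, C answer choices per item.
# persona_probs[k, m, c] = P(answer c to item m | persona k).
# In the paper these distributions come from an LLM; here they are random.
rng = np.random.default_rng(0)
K, M, C = 5, 10, 4
persona_probs = rng.dirichlet(np.ones(C), size=(K, M))  # shape (K, M, C)

posterior = np.full(K, 1.0 / K)  # uniform prior over persona membership

def update_posterior(posterior, item, answer):
    """Closed-form Bayes update: reweight each persona by the likelihood
    it assigns to the observed answer, then renormalize."""
    likelihood = persona_probs[:, item, answer]   # shape (K,)
    unnormalized = posterior * likelihood
    return unnormalized / unnormalized.sum()

def predict(posterior, item):
    """Finite-mixture predictive distribution over answers to an item:
    a posterior-weighted average of the persona response distributions."""
    return posterior @ persona_probs[:, item, :]  # shape (C,)

# Observing answer 2 to item 0 sharpens the persona posterior;
# predictions for held-out items follow immediately.
posterior = update_posterior(posterior, item=0, answer=2)
pred = predict(posterior, item=3)
```

The update is O(K) per observed answer and the prediction is O(KC) per held-out item, which is the source of the method's scalability relative to posterior-approximation approaches.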
- Represents users via membership in a finite dictionary of LLM-generated AI personas, each with precomputed response distributions.
- Enables closed-form posterior updates and finite-mixture predictions, eliminating the need for MCMC or variational inference.
- Demonstrates accurate probabilistic predictions on WorldValuesBench with an interpretable adaptive questioning pipeline.
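The adaptive questioning step in the pipeline above can be sketched with a standard Bayesian-design criterion: greedily ask the item whose answer carries the most mutual information about persona membership. This is one common criterion and may differ from the paper's exact selection rule; all names (`persona_probs`, `expected_info_gain`) and the random setup are illustrative assumptions. Because the mixture is finite, the criterion is computable exactly in closed form.

```python
import numpy as np

# Hypothetical setup mirroring a persona-mixture model: K personas,
# M candidate items, C answer choices; distributions are random stand-ins.
rng = np.random.default_rng(1)
K, M, C = 5, 10, 4
persona_probs = rng.dirichlet(np.ones(C), size=(K, M))  # (K, M, C)

def entropy(p):
    """Shannon entropy along the last axis, clipped for numerical safety."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def expected_info_gain(posterior, item):
    """Mutual information between persona membership and the answer to
    `item`: H(answer) minus the posterior-weighted conditional entropy.
    Exact here because the persona mixture is finite."""
    marginal = posterior @ persona_probs[:, item, :]       # P(answer)
    cond = posterior @ entropy(persona_probs[:, item, :])  # E_k[H(answer|k)]
    return entropy(marginal) - cond

# Greedy sequential design: under a uniform persona posterior, pick the
# single most informative item to ask next.
posterior = np.full(K, 1.0 / K)
best_item = max(range(M), key=lambda m: expected_info_gain(posterior, m))
```

In a full loop, one would ask `best_item`, apply the closed-form posterior update on the observed answer, and repeat until the question budget is exhausted.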
Why It Matters
Enables personalized adaptive testing (e.g., psychometrics, surveys) with fewer questions and interpretable results.