A Solicit-Then-Suggest Model of Agentic Purchasing
New research proves a few targeted questions from an AI agent can replace dozens of product recommendations.
A new research paper from Shengyu Cao and Ming Hu, titled 'A Solicit-Then-Suggest Model of Agentic Purchasing,' provides a rigorous mathematical framework for the emerging shift from search-based to conversational e-commerce. The model formalizes how an AI shopping agent operates: it first conducts 'm' rounds of targeted questioning (solicitation) to refine its belief about a customer's ideal product within a 'd'-dimensional preference space. It then recommends a tailored assortment of 'k' products (suggestion) from which the customer chooses. The core analysis reveals a fundamental economic trade-off and substitutability between the depth of conversation and the breadth of the product list.
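The solicit-then-suggest loop can be sketched in a few lines. This is a toy simulation rather than the paper's model: it assumes, for illustration only, that each question perfectly resolves one preference dimension and that candidate products are sampled near the refined belief, simply to make the two phases concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 4, 3, 8  # preference dimensions, question rounds, assortment size

# The customer's ideal product is a point in d-dimensional preference space.
ideal = rng.normal(size=d)

# Solicitation: each of the m questions resolves one dimension exactly
# (a toy stand-in for Bayesian belief refinement under a Gaussian prior).
posterior_mean = np.zeros(d)
resolved = rng.choice(d, size=m, replace=False)
posterior_mean[resolved] = ideal[resolved]

# Suggestion: offer k products near the refined belief; the customer picks
# the one with the smallest squared mismatch to their ideal point.
assortment = posterior_mean + rng.normal(scale=0.5, size=(k, d))
losses = np.sum((assortment - ideal) ** 2, axis=1)
chosen = assortment[np.argmin(losses)]
```

The residual loss `losses.min()` is exactly the "mismatch" the paper's rates describe: deeper solicitation shrinks it by resolving dimensions, while a larger assortment shrinks it by covering the unresolved ones.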
Under a Gaussian prior assumption, the researchers establish a crucial 'uncertainty decomposition.' They prove that solicitation depth and assortment breadth are substitutes, with total prior uncertainty split between what the conversation resolves and what the product range hedges against. The efficiency gap is stark: expected mismatch loss decreases on the order of 1/m with more questions, but only on the order of k^(-2/d) with more recommendations. This 'curse of dimensionality' means that for complex preferences (high 'd'), matching the benefit of a few extra questions requires an assortment whose size grows exponentially in the number of preference dimensions. Consequently, a handful of intelligent questions can achieve what would otherwise require a massive, unwieldy list of recommendations.
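The scale of the gap follows from equating the two rates: a pure-recommendation agent reaches the mismatch level 1/m only when k^(-2/d) = 1/m, i.e. k = m^(d/2). The closed form below is our own algebra on the paper's stated rates, not a formula quoted from it:

```python
def products_needed(m: int, d: int) -> float:
    """Assortment size k that matches the mismatch level 1/m achieved by
    m questions, from solving k**(-2/d) == 1/m for k."""
    return m ** (d / 2)

for d in (2, 4, 8):
    print(d, products_needed(10, d))
# d=2 needs 10 products; d=4 needs 100; d=8 already needs 10,000.
```

Ten questions' worth of refinement costs ten products in two dimensions but ten thousand in eight, which is the substitution trade-off in concrete numbers.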
The paper also characterizes the optimal policy for these AI agents. The ideal product assortment forms a Voronoi partition, strategically grouping products to serve specific regions of the refined preference space. For a single recommended product, the optimal questioning strategy follows a 'water-filling' rule that equalizes posterior uncertainty across all preference dimensions. With multiple products, the agent can smartly allocate less precision to dimensions that the assortment itself can hedge. This water-filling rule provides a strong general approximation guarantee for larger assortments, with the performance gap vanishing as dimensionality increases. The researchers note that the core findings—the uncertainty decomposition and the substitutability of questions and products—hold true even for non-Gaussian priors, underscoring the model's robustness.
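The single-product water-filling rule can be illustrated with a small allocator. This is a sketch under our own assumptions, not the paper's algorithm: we posit a Gaussian belief in which each unit of questioning 'budget' adds one unit of precision to a chosen dimension, then bisect for the common posterior-precision level that equalizes uncertainty:

```python
def water_fill(prior_vars, budget):
    """Split a total question-precision budget across dimensions so that
    posterior variances are equalized where possible (water-filling)."""
    prior_prec = [1.0 / v for v in prior_vars]
    lo, hi = 0.0, max(prior_prec) + budget  # bracket the water level
    for _ in range(100):  # bisect on the shared posterior-precision level
        level = (lo + hi) / 2
        spent = sum(max(0.0, level - p) for p in prior_prec)
        if spent > budget:
            hi = level
        else:
            lo = level
    return [max(0.0, lo - p) for p in prior_prec]

alloc = water_fill([4.0, 1.0, 0.25], budget=3.0)
# Most precision goes to the most uncertain dimension (prior variance 4.0).
```

The dimension whose prior is already sharper than the water level (variance 0.25 here) receives no questions, matching the paper's intuition that the agent can leave well-resolved or assortment-hedged dimensions alone.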
- Proves conversation is exponentially more efficient than listing products: match quality improves at rate 1/m with questions vs. k^(-2/d) with more items.
- Identifies optimal AI agent policy: uses a 'water-filling' rule for questions and a Voronoi partition for product recommendations.
- Formalizes the trade-off: 'solicitation depth' (questions) and 'assortment breadth' (recommendations) are direct substitutes for reducing customer uncertainty.
Why It Matters
Provides the mathematical blueprint for the next generation of AI shopping assistants, moving beyond keyword search to efficient, conversational commerce.