Research & Papers

PETS: A Principled Framework Towards Optimal Trajectory Allocation for Efficient Test-Time Self-Consistency

New research slashes the compute budget needed for reliable AI outputs by optimizing how many reasoning 'thoughts' each question gets.

Deep Dive

Researchers from multiple institutions introduced PETS (Principled and Efficient Test-Time Self-Consistency), a framework that decides how many reasoning 'trajectories' a model should sample for each query, rather than giving every query the same fixed number. It connects the problem to crowdsourcing theory, treating each sampled reasoning path like a crowd worker voting on the answer, which lets the budget shift toward harder questions. In tests on the GPQA benchmark, PETS preserved full self-consistency accuracy while reducing the required sampling budget by 75% in offline settings and 55% in online streaming scenarios, compared to uniform allocation.
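The paper's actual allocation rule isn't reproduced here, but the core contrast with uniform allocation can be sketched with a simple, hypothetical adaptive scheme: keep sampling trajectories for a query only until the majority answer looks stable. The function name, thresholds, and the `sample_answer` callable below are illustrative assumptions, not the PETS algorithm.

```python
from collections import Counter
import itertools

def allocate_adaptively(sample_answer, min_samples=3, max_samples=16, margin=0.6):
    """Sample reasoning trajectories for one query until the majority
    answer is confident enough, instead of always spending max_samples.

    sample_answer: callable that runs one trajectory and returns its
    final answer (a hypothetical stand-in for a model call).
    Thresholds here are illustrative, not from the paper.
    """
    votes = Counter()
    for n in range(1, max_samples + 1):
        votes[sample_answer()] += 1
        if n >= min_samples:
            top, count = votes.most_common(1)[0]
            if count / n >= margin:  # majority is stable: stop early
                return top, n
    # Budget exhausted: fall back to a plain majority vote.
    return votes.most_common(1)[0][0], max_samples

# An "easy" query where every trajectory agrees: sampling stops quickly,
# freeing budget for harder queries under the same total allocation.
answers = itertools.cycle(["42"])
ans, used = allocate_adaptively(lambda: next(answers))
# → ans == "42", used == 3 (well under the max of 16)
```

Under uniform allocation this query would consume all 16 samples; an adaptive rule spends 3 and banks the rest, which is the intuition behind non-uniform trajectory budgets.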

Why It Matters

Dramatically lowers the compute cost of making AI outputs reliable, enabling more affordable deployment of advanced reasoning models.