Are More Tokens Rational? Inference-Time Scaling in Language Models as Adaptive Resource Rationality
New research suggests AI models can adapt their reasoning strategies to task demands, much as humans do when cognitive resources are limited.
A new study shows that Large Language Models exhibit 'resource rationality', adapting their reasoning strategies to task complexity without explicit training on computational costs. When faced with complex logical functions such as XOR, instruction-tuned models degrade, while Large Reasoning Models remain robust. This suggests that adaptive efficiency emerges naturally from inference-time scaling. The researchers systematically manipulated task variables and observed a transition from brute-force to analytic strategies as complexity increased.
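To make the setup concrete, here is a minimal sketch of how one might build Boolean-function tasks of varying complexity (AND, OR, and the harder XOR/parity case) and record a model's accuracy alongside the reasoning tokens it spends. This is an illustration only, not the study's actual protocol: the prompt wording and the `query_model` stub are hypothetical placeholders for a real call to an instruction-tuned model or a Large Reasoning Model.

```python
import itertools
import random

# Candidate Boolean functions, ordered roughly by difficulty:
# conjunction and disjunction are easy; parity (XOR) is the classic hard case.
FUNCTIONS = {
    "AND": lambda bits: int(all(bits)),
    "OR":  lambda bits: int(any(bits)),
    "XOR": lambda bits: sum(bits) % 2,  # parity over all inputs
}

def make_task(fn_name: str, n_bits: int, n_examples: int, seed: int = 0):
    """Build a few-shot task: labelled examples plus one held-out query."""
    rng = random.Random(seed)
    fn = FUNCTIONS[fn_name]
    inputs = list(itertools.product([0, 1], repeat=n_bits))
    rng.shuffle(inputs)
    shots, query = inputs[:n_examples], inputs[n_examples]
    lines = [f"Input: {list(x)} -> Output: {fn(x)}" for x in shots]
    prompt = (
        "Infer the hidden rule from the examples, then answer the query.\n"
        + "\n".join(lines)
        + f"\nInput: {list(query)} -> Output:"
    )
    return prompt, fn(query)

def query_model(prompt: str) -> tuple[int, int]:
    """Hypothetical stub: return (predicted_label, reasoning_tokens_used).

    A real experiment would send the prompt to a model and count the tokens
    it emits before committing to an answer; here we return random values
    so the sketch runs end to end.
    """
    return random.choice([0, 1]), random.randint(50, 500)

if __name__ == "__main__":
    for fn_name in FUNCTIONS:
        prompt, target = make_task(fn_name, n_bits=3, n_examples=6)
        pred, tokens = query_model(prompt)
        print(f"{fn_name}: correct={pred == target}, reasoning_tokens={tokens}")
```

Sweeping `n_bits` and the function family, then plotting accuracy and token counts against complexity, is the kind of comparison the study describes: a resource-rational model should spend more tokens only when the harder rule actually demands it.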
Why It Matters
This could lead to more efficient, human-like AI reasoning that dynamically optimizes computational resources during problem-solving.