Enhancing LLM Problem Solving via Tutor-Student Multi-Agent Interaction
A new multi-agent system achieves state-of-the-art coding accuracy while using significantly fewer tokens.
Researchers Nurullah Eymen Özdemir and Erhan Oztop have introduced a novel AI framework called PETITE that enhances large language model (LLM) performance by mimicking human tutoring dynamics. Instead of using a more powerful model or a complex ensemble, PETITE creates two agents from the same base LLM: one acts as a 'student' that generates and refines code solutions, while the other acts as a 'tutor' that provides structured, evaluative feedback without access to correct answers. This role-differentiated interaction is inspired by developmental psychology, where structured social learning leads to cognitive gains neither party could achieve alone.
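The tutor-student loop described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' actual implementation: the prompt templates, the `call_llm` callable, and the fixed round count are all assumptions, and the key property it demonstrates is that a single base model plays both roles while the tutor never sees a reference answer.

```python
# Hypothetical sketch of a tutor-student refinement loop in the spirit of PETITE.
# Prompt wording, function names, and round count are illustrative assumptions.

STUDENT_PROMPT = (
    "You are a student. Solve the problem:\n{problem}\n"
    "Your prior attempt (may be empty):\n{attempt}\n"
    "Tutor feedback (may be empty):\n{feedback}"
)
TUTOR_PROMPT = (
    "You are a tutor. Without revealing or knowing the correct answer, "
    "give structured, evaluative feedback on this attempt:\n{attempt}"
)

def tutor_student_loop(call_llm, problem, rounds=3):
    """Alternate student (generate/refine) and tutor (critique) turns
    using one base model prompted into two different roles."""
    attempt, feedback = "", ""
    transcript = []
    for _ in range(rounds):
        # Student role: produce or refine a solution given the tutor's feedback.
        attempt = call_llm(
            STUDENT_PROMPT.format(problem=problem, attempt=attempt, feedback=feedback)
        )
        # Tutor role: critique the attempt with no access to a reference answer.
        feedback = call_llm(TUTOR_PROMPT.format(attempt=attempt))
        transcript.append((attempt, feedback))
    return attempt, transcript

# Stub model for demonstration; a real system would call an LLM API here.
def stub_llm(prompt):
    if prompt.startswith("You are a tutor"):
        return "Feedback: check edge cases and input parsing."
    return "def solve():\n    return 42"

final, history = tutor_student_loop(stub_llm, "Write solve() returning 42.", rounds=2)
```

Because both roles share one model, the only extra cost per round is the tutor's feedback turn, which is how a scheme like this can stay cheaper than sampling many independent solutions as Self-Consistency does.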
In testing on the challenging APPS coding benchmark, PETITE proved both effective and efficient. The framework achieved accuracy similar to or higher than established techniques like Self-Consistency, Self-Refine, and Multi-Agent Debate. Crucially, it accomplished this while consuming significantly fewer tokens, making it a more computationally efficient approach. The results suggest that carefully designed multi-agent interaction, grounded in educational principles, is a powerful and resource-conscious paradigm for pushing LLMs beyond their standard solitary performance.
- PETITE uses two agents from one LLM in tutor/student roles, avoiding the need for a stronger supervisory model.
- Tested on the APPS benchmark, it matched or exceeded established methods like Self-Consistency, Self-Refine, and Multi-Agent Debate.
- Achieved high accuracy while consuming significantly fewer tokens, a major efficiency gain.
Why It Matters
Offers a cost-effective way to boost AI coding assistants and complex reasoning without requiring larger, more expensive models.