Competition and Cooperation of LLM Agents in Games
New research finds AI agents choose fairness and collaboration over pure self-interest in multi-round games.
A new research paper from Jiayi Yao, Cong Chen, and Baosen Zhang, titled "Competition and Cooperation of LLM Agents in Games," investigates how Large Language Model (LLM) agents behave in strategic, multi-agent environments. The study placed LLM agents in two classic game theory scenarios: a network resource allocation game and a Cournot competition game. Contrary to traditional economic models, which predict convergence to Nash equilibria—outcomes where no agent can benefit by unilaterally changing strategy—the LLM agents consistently moved towards cooperative outcomes. This behavior was particularly pronounced when prompts framed the interaction as multi-round and non-zero-sum, suggesting that the social framing of the task significantly influences the agents' strategic calculus.
Chain-of-thought analysis, a technique for tracing an AI's internal reasoning, was central to understanding this phenomenon. It revealed that the agents' decisions were heavily influenced by fairness considerations and a tendency to seek mutually beneficial solutions, rather than pure maximization of individual payoff. The researchers propose a new analytical framework to model the dynamics of LLM agent reasoning across iterative interactions, which helps explain these experimental findings. This work, available on arXiv under the identifier arXiv:2604.00487, bridges the fields of multi-agent systems, game theory, and AI alignment, providing crucial insights into how increasingly autonomous AI agents might negotiate and collaborate in complex, real-world settings.
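To make the Nash-versus-cooperation gap concrete, here is a minimal sketch of a textbook Cournot duopoly (not the paper's actual experimental setup; the linear demand curve and all parameter values below are illustrative assumptions). Iterated best responses converge to the Nash equilibrium quantity, while the cooperative (joint-monopoly) split produces a lower output and a higher profit for each firm—the mutually beneficial outcome the LLM agents gravitated towards.

```python
# Hypothetical Cournot duopoly with linear inverse demand P = a - b*(q1 + q2)
# and constant unit cost c. All parameter values are illustrative.
a, b, c = 100.0, 1.0, 10.0

def profit(q_own, q_other):
    """Profit of one firm given both firms' quantities."""
    price = a - b * (q_own + q_other)
    return (price - c) * q_own

def best_response(q_other):
    """Profit-maximizing quantity against the rival's fixed output."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

# Repeated simultaneous best responses converge to the Nash equilibrium,
# where each firm produces q* = (a - c) / (3b).
q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)

nash_q = (a - c) / (3 * b)       # 30.0 per firm at Nash equilibrium
coop_q = (a - c) / (4 * b)       # 22.5 per firm under joint-monopoly split

print(round(q1, 3), round(q2, 3))        # both converge to 30.0
print(profit(nash_q, nash_q))            # 900.0 per firm at Nash
print(profit(coop_q, coop_q))            # 1012.5 per firm when cooperating
```

The punchline of the sketch: cooperating (restraining output) beats the Nash outcome for both players, yet unilaterally deviating from cooperation is tempting—which is why classical theory predicts the equilibrium and why the agents' sustained cooperation is notable.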
- LLM agents tested in network resource and Cournot competition games defied classic game theory by cooperating instead of reaching Nash equilibria.
- Chain-of-thought analysis showed cooperation was driven by built-in fairness reasoning, especially in multi-round, non-zero-sum prompts.
- Researchers propose a new framework to model the reasoning dynamics of LLM agents across iterative strategic interactions.
Why It Matters
The findings suggest that autonomous AI agents negotiating in real-world settings may favor collaboration over pure competition, with implications for economics, diplomacy, and multi-agent system design.