Beyond Self-Interest: Modeling Social-Oriented Motivation for Human-like Multi-Agent Interactions
New AI agents balance self-interest with altruism, making multi-agent simulations 40% more human-like.
A research team from Peking University and UCLA has published a paper introducing Autonomous Social Value-Oriented (ASVO) agents, a novel framework for creating more human-like multi-agent interactions. The core innovation is integrating Large Language Models (LLMs) with established psychological theory—specifically Social Value Orientation (SVO). Unlike typical AI agents that operate on pure self-interest, ASVO agents maintain a dynamic internal model of multi-dimensional desires (like achievement or affiliation) and, crucially, estimate the satisfaction levels of other agents. By contrasting their own fulfilled desires against their perception of others' satisfaction, each agent computes a real-time SVO score, positioning itself on a spectrum from purely altruistic to purely competitive. This score then guides their activity selection, creating a balance between personal goal fulfillment and social alignment.
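To make the mechanism concrete, here is a minimal sketch of SVO-weighted activity selection, assuming the classic ring-measure utility from SVO theory. The function names, activity names, and payoff values are illustrative assumptions, not the paper's actual implementation:

```python
import math

# Conventional SVO angles (degrees) on the altruistic-to-competitive spectrum.
PROSOCIAL, INDIVIDUALISTIC, COMPETITIVE = 45.0, 0.0, -45.0

def svo_utility(theta_deg: float, gain_self: float, gain_other: float) -> float:
    """Classic SVO utility: weight own vs. others' payoff by the SVO angle.
    45 deg -> prosocial (equal weight), 0 -> purely self-interested,
    negative -> competitive (others' gains count against the action)."""
    theta = math.radians(theta_deg)
    return math.cos(theta) * gain_self + math.sin(theta) * gain_other

def choose_activity(theta_deg: float, activities: dict) -> str:
    """Pick the activity maximizing SVO-weighted utility.
    `activities` maps name -> (expected gain to the agent's own desires,
    expected gain to the perceived satisfaction of others)."""
    return max(activities,
               key=lambda a: svo_utility(theta_deg, *activities[a]))

# Hypothetical options in a School-like setting:
activities = {
    "finish_own_assignment": (0.8, 0.0),
    "help_teammate":         (0.2, 0.7),
}
print(choose_activity(INDIVIDUALISTIC, activities))  # finish_own_assignment
print(choose_activity(PROSOCIAL, activities))        # help_teammate
```

The same pair of candidate activities yields different choices depending on where the agent currently sits on the SVO spectrum, which is the balance between personal goal fulfillment and social alignment the paper describes.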
The system was rigorously tested across three complex social environments: a School, a Workplace, and a Family setting. In these simulations, agents performed tasks like completing assignments, collaborating on projects, and managing household responsibilities. The ASVO framework demonstrated "substantial improvements" over standard LLM-based agent baselines in key metrics of behavioral naturalness and human-likeness. The researchers attribute this success to the structured desire system, which provides coherent internal motivation, and the adaptive "SVO drift," which allows agents to contextually adjust their social stance. For example, an agent might become more cooperative when it perceives a teammate is struggling, or more competitive when resources are scarce.
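The adaptive "SVO drift" described above might be sketched as a simple update rule. This is a hypothetical illustration of the idea, not the paper's mechanism: the agent's SVO angle is nudged toward cooperation when others appear less satisfied than the agent itself, and back toward self-interest when the agent is falling behind. All names, the drift rate, and the clamping range are assumptions:

```python
def drift_svo(theta_deg: float, own_sat: float, others_sat: float,
              rate: float = 15.0, lo: float = -45.0, hi: float = 45.0) -> float:
    """Hypothetical SVO-drift rule: shift the SVO angle in proportion to the
    gap between the agent's own satisfaction and its estimate of others'.
    `rate` scales degrees of drift per step; the result is clamped to the
    usual competitive (-45) .. prosocial (+45) range."""
    gap = own_sat - others_sat            # positive: others are struggling
    theta = theta_deg + rate * gap
    return max(lo, min(hi, theta))

# An individualistic agent (0 deg) that perceives a struggling teammate
# (own satisfaction 0.75 vs. others' 0.25) drifts prosocial:
print(drift_svo(0.0, own_sat=0.75, others_sat=0.25))  # 7.5
```

Under a rule like this, scarcity (others better off than the agent) produces a negative gap and a drift toward the competitive end, matching the contextual behavior the researchers describe.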
This work, accepted for an oral presentation at the AAMAS 2026 conference, represents a significant step beyond current multi-agent systems, which often lack nuanced social mechanics. By grounding agent behavior in psychological theory, the ASVO model moves AI simulations closer to capturing the complex, motivation-driven nature of real human social dynamics. The framework opens new avenues for creating believable non-player characters in games, sophisticated training environments for soft skills, and powerful tools for computational social science, where simulating realistic group behavior is paramount.
Key Takeaways
- Agents use Social Value Orientation (SVO) theory to dynamically shift between altruistic and competitive behaviors based on context.
- The system showed substantial improvements in human-likeness across School, Workplace, and Family simulation environments.
- Agents maintain a multi-dimensional desire system (e.g., for achievement, affiliation) and estimate others' satisfaction to guide social decisions.
Why It Matters
Enables vastly more realistic AI simulations for gaming, professional training, and social science research, moving beyond simplistic self-interested agents.