Emergence of Internal State-Modulated Swarming in Multi-Agent Patch Foraging System
A new study shows AI agents spontaneously form swarms based on internal 'hunger' states, mimicking biological risk-sensitive foraging.
A team of researchers from Radboud University has posted a study on arXiv demonstrating how simulated AI foragers can develop emergent swarming behavior modulated by their internal 'hunger' state. The researchers modeled multiple self-propelled agents foraging for resources in a 2D environment with partial observability. Using an evolutionary strategy, they trained a shared Continuous-Time Recurrent Neural Network (CTRNN) policy that controlled the agents' velocities. Crucially, the agents were not explicitly programmed to swarm; this collective behavior emerged from the evolutionary training process as an adaptive strategy.
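To make the architecture concrete, here is a minimal sketch of a CTRNN policy stepping an agent: hidden neurons with per-neuron time constants are Euler-integrated, and the hidden state is read out as a 2D velocity command. All sizes, weight scales, and names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class CTRNNPolicy:
    """Minimal CTRNN policy sketch (hypothetical sizes and initialization,
    not the paper's actual network)."""

    def __init__(self, n_obs=8, n_hidden=16, n_out=2, dt=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.dt = dt
        self.tau = np.ones(n_hidden)             # per-neuron time constants
        self.W_in = rng.normal(0.0, 0.5, (n_hidden, n_obs))
        self.W = rng.normal(0.0, 0.5, (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 0.5, (n_out, n_hidden))
        self.y = np.zeros(n_hidden)              # hidden state

    def step(self, obs):
        # Euler step of: tau * dy/dt = -y + W @ tanh(y) + W_in @ obs
        dy = (-self.y + self.W @ np.tanh(self.y) + self.W_in @ obs) / self.tau
        self.y = self.y + self.dt * dy
        return self.W_out @ np.tanh(self.y)      # 2D velocity command
```

In an evolutionary-strategy setup, the flattened weight matrices would be the genome shared by all agents, with fitness given by resources collected.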
The study's most significant finding is that the strength of the swarming behavior was inversely related to the amount of 'resource' each agent stored internally. In essence, 'hungrier' agents swarmed together more strongly. Empirical analysis of the CTRNN's hidden states confirmed that they encoded this internal resource level, and when the researchers artificially clamped these states to simulate low resources, the agents aggregated faster. This supports theories of 'risk-sensitive foraging,' in which animals alter their social behavior based on internal need, and provides a clear, mechanistic AI model of how such group dynamics can self-organize from simple principles.
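The reported relationship can be illustrated with a deliberately simple toy model (not the learned policy): each agent is attracted to the group centroid with a gain proportional to its 'hunger', so low-resource agents aggregate faster. The gain `k` and the linear hunger term are assumptions for illustration only.

```python
import numpy as np

def attraction_step(pos, resource, k=0.5, dt=0.1):
    """Toy model of hunger-modulated aggregation.

    pos:      (N, 2) agent positions
    resource: (N,) internal resource levels in [0, 1]
    Each agent moves toward the swarm centroid with a gain scaled
    by its hunger (1 - resource), so hungrier agents aggregate faster.
    """
    centroid = pos.mean(axis=0)
    hunger = 1.0 - resource                      # low resource -> strong pull
    vel = k * hunger[:, None] * (centroid - pos)
    return pos + dt * vel
```

Clamping the resource variable low (the analogue of the hidden-state clamping experiment) makes the group's spread shrink faster per step than clamping it high.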
- AI foragers evolved a shared CTRNN policy via evolutionary strategy, leading to emergent swarming without explicit coordination rules.
- Swarming strength was inversely tied to internal resource levels; 'hungrier' agents aggregated more strongly, modeling biological risk-sensitive foraging.
- Analysis showed that the neural network's hidden states encoded resource levels, and manipulating them directly altered swarming behavior.
Why It Matters
This provides a foundational model for developing adaptive, biologically-inspired multi-agent AI systems for robotics, logistics, and complex simulations.