Agent Frameworks

Hierarchical Multi-Agent Reinforcement Learning for Multi-Group Tax Game

Multi-agent RL simulates competing governments and households to stabilize tax policies...

Deep Dive

A new arXiv paper tackles the complexity of taxation when multiple governments compete for economic outcomes. The researchers formulate taxation as a hierarchical multi-group game: within each group, the government acts as a leader optimizing fiscal policy in response to household behavior (followers), while across groups, governments compete against each other. This hybrid structure is notoriously hard for standard multi-agent reinforcement learning algorithms to handle.
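To make the structure concrete, here is a minimal toy sketch of such a hierarchical multi-group game: within each group a government (leader) sets a tax rate and households (followers) best-respond with their labor supply. All class names, response rules, and numbers below are illustrative placeholders, not the paper's actual economic model.

```python
from dataclasses import dataclass, field

@dataclass
class Household:
    productivity: float

    def respond(self, tax_rate: float) -> float:
        # Follower best response (toy rule): supply less labor as taxes rise.
        return self.productivity * (1.0 - tax_rate)

@dataclass
class Group:
    tax_rate: float                       # the leader's (government's) action
    households: list = field(default_factory=list)

    def step(self) -> tuple[float, float]:
        # Group output is the sum of household responses to the tax rate;
        # the leader's payoff component is the tax revenue collected.
        output = sum(h.respond(self.tax_rate) for h in self.households)
        revenue = self.tax_rate * output
        return output, revenue

# Two competing groups with identical households but different tax policies.
g1 = Group(0.3, [Household(1.0), Household(1.5)])
g2 = Group(0.5, [Household(1.0), Household(1.5)])
(gdp1, rev1), (gdp2, rev2) = g1.step(), g2.step()
print(gdp1, gdp2)  # the higher-tax group produces less output
```

Across groups, each government would then also weigh its outcome against its rivals' (e.g. relative GDP), which is what makes the game simultaneously cooperative within a group and competitive between groups.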

The team proposes a bi-level training framework built on multi-agent RL, augmented with Curriculum Learning and a Closed-Loop Sequential Update strategy to stabilize training. In their simulation environment, which mirrors classical economic models, the approach learns stable tax policies that benefit all participating groups. Compared to a two-group baseline lacking these updates, their method extends the effective game duration by 60.92% and reduces GDP disparities by 44.12%, signaling more sustainable and equitable fiscal outcomes.
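The closed-loop flavor of the sequential update can be sketched as coordinate ascent: each government's policy is improved in turn while the other governments are held fixed, and the followers' responses are recomputed after every change, rather than all agents updating simultaneously. The payoff function below is a hypothetical Laffer-style toy with a tax-competition term, not the paper's economy or its RL objective.

```python
def payoff(own: float, rival: float) -> float:
    # Toy leader payoff: own * (1 - own) is a Laffer-style curve
    # (zero revenue at 0% and 100% tax); taxing more than the rival
    # drives economic activity away, scaling the payoff down.
    return own * (1.0 - own) * (1.0 - 0.5 * (own - rival))

def sequential_update(taxes: list, grid: list, rounds: int = 10) -> list:
    """Closed-loop sequential update: one leader at a time, each
    best-responding to the rivals' *current* (already updated) policies."""
    for _ in range(rounds):
        for i in range(len(taxes)):
            rival = taxes[1 - i]  # two-group case for simplicity
            taxes[i] = max(grid, key=lambda t: payoff(t, rival))
    return taxes

# Candidate tax rates from 0% to 95% in 5% steps.
grid = [round(0.05 * k, 2) for k in range(20)]
print(sequential_update([0.9, 0.1], grid))  # → [0.45, 0.45], a stable point
```

The point of the sequential (rather than simultaneous) update is that each agent always optimizes against the others' latest policies, which avoids the oscillations that simultaneous best responses can produce; in the paper this idea, combined with curriculum learning, is what stabilizes the multi-agent RL training.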

Key Points
  • Models multi-government tax competition as a hierarchical leader-follower game across groups
  • Uses Curriculum Learning and Closed-Loop Sequential Update to stabilize multi-agent RL training
  • Achieves 60.92% longer game duration and 44.12% reduction in GDP disparities vs baseline

Why It Matters

Enables AI-driven simulation of realistic multi-government tax policy, helping design more stable and equitable fiscal systems.