We Need Strong Preconditions For Using Simulations In Policy
New paper warns against simulating marginalized populations without participation or accountability.
A team of researchers including Steven Luo, Saanvi Arora, and Carlos Guirado has published a paper on arXiv titled 'We Need Strong Preconditions For Using Simulations In Policy.' The paper addresses growing concerns about the use of LLM agent simulations in policymaking, in which AI models simulate human behavior to forecast outcomes and test interventions. While acknowledging the considerable potential of these tools, the authors highlight two critical and understudied challenges: the dual-use potential of accurate models of human behavior, and the fundamental difficulty of validating simulation outputs against real-world results.
In response to these challenges, the researchers propose three ethical preconditions for conducting societal-scale LLM agent simulations. First, simulations of marginalized populations should never be treated as neutral technical outputs, since they risk reinforcing existing biases. Second, no population should be simulated without its meaningful participation in the process. Third, simulations must include clear accountability mechanisms for both developers and decision-makers. The paper, accepted to the PoliSim Workshop at the 2026 ACM CHI conference, also calls for standardized simulation development and deployment reports to build trust among policymakers and ensure these powerful tools are used responsibly for public benefit.
- Proposes three ethical preconditions: no treating simulations of marginalized groups as neutral, no simulation without the population's participation, and no simulation without accountability
- Highlights dual-use risks where accurate human behavior models could be misused
- Calls for standardized development/deployment reports to increase transparency and trust in policy simulations
Why It Matters
As governments increasingly use AI simulations to shape policy, these guardrails could prevent harmful outcomes and build public trust.