Media & Culture

OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

The proposed law would shield companies from lawsuits over AI-caused disasters involving 100+ deaths or $1B+ in damage.

Deep Dive

OpenAI is actively supporting Illinois Senate Bill 3444, legislation that would create a significant legal shield for developers of frontier AI models. The bill defines a 'frontier model' as one trained using over $100 million in compute costs—a threshold that covers major players like OpenAI, Google, Anthropic, xAI, and Meta. Under SB 3444, these companies would be protected from liability for 'critical harms'—defined as incidents causing death or serious injury to 100 or more people, or at least $1 billion in property damage—provided they publish safety, security, and transparency reports and do not intentionally or recklessly cause the incident. This represents a notable shift in OpenAI's legislative strategy, from playing defense against liability proposals to proactively backing a specific, industry-friendly framework.

The bill's definition of critical harm includes scenarios like a bad actor using an AI model to create a chemical, biological, radiological, or nuclear (CBRN) weapon, or an AI model autonomously committing acts that would be criminal if done by a human. In testimony, OpenAI's Global Affairs team argued that such state laws can be effective if they align with a future federal system, emphasizing the need to avoid a 'patchwork' of state rules and to preserve U.S. leadership in AI innovation. However, the bill faces steep opposition: a poll by the Secure AI project found that 90% of Illinois residents oppose exempting AI companies from liability, and the state has a history of aggressive tech regulation, including pioneering laws on biometric data and AI in mental health services.

Key Points
  • The bill (SB 3444) would protect AI labs from liability for 'critical harms'—defined as 100+ deaths/serious injuries or $1B+ in damage—if they publish safety reports and aren't reckless.
  • It defines 'frontier models' as those trained with over $100M in compute, covering OpenAI, Google, Anthropic, Meta, and xAI.
  • This marks a strategic shift for OpenAI from opposing liability rules to backing a specific shield, arguing it avoids a state-by-state 'patchwork' and preserves U.S. innovation leadership.

Why It Matters

This sets a potential legal precedent for how AI companies are held accountable for catastrophic harms caused by their most powerful systems.