Anthropic Opposes the Extreme AI Liability Bill That OpenAI Backed
Claude-maker pushes back on legislation requiring AI companies to pay for all harms.
Anthropic, the company behind the Claude AI models, has taken a public stance against California's proposed AI liability legislation, Senate Bill 1047. The bill, which has received support from OpenAI, would establish strict liability for AI developers, making them financially responsible for any harms caused by models that exceed a $500 million computational threshold. This represents a significant fracture in an industry that has often lobbied with a unified voice on AI policy.
Anthropic's opposition centers on the bill's "extreme liability" framework. The company argues that holding developers liable for all harms, even when they have implemented reasonable safety precautions, creates an untenable risk that could cripple the domestic AI industry. In a letter to lawmakers, Anthropic warned that such measures would likely drive innovation and investment out of California and potentially the United States, handing advantage to international competitors with less stringent regulations.
The debate highlights a fundamental tension in AI governance: balancing public safety against technological progress. Proponents of SB 1047, including some AI safety advocates, argue that strong liability is necessary to ensure companies build safe models and have "skin in the game." Anthropic counters that the current bill is overly broad and could punish responsible developers for unforeseeable misuse of their technology. The company is advocating for a more nuanced approach that distinguishes between negligent deployment and unavoidable emergent risks.
This policy split between two leading AI labs—Anthropic and OpenAI—signals evolving corporate strategies as regulation looms. While both companies emphasize AI safety, their divergence on liability suggests different calculations about operational risk and competitive positioning. The outcome in California, a global tech policy bellwether, could set precedents affecting AI development nationwide and influence how other jurisdictions approach the complex challenge of governing advanced artificial intelligence.
- Anthropic opposes CA SB 1047, which would impose strict liability for harms from AI models over $500M in compute.
- The bill creates an industry split, as OpenAI has supported the liability framework while Anthropic calls it "extreme."
- Anthropic warns the law could stifle US innovation and push AI development to less-regulated international markets.
Why It Matters
This regulatory fight could determine whether AI companies face unlimited liability, shaping where and how advanced AI is built.