Anthropic Rolls Back Safety Protocols as It Waits to Find Out If It’s Being Drafted by the Army
Claude-maker updates its Responsible Scaling Policy, moving from firm 'red lines' to 'nonbinding' goals.
Anthropic, the AI safety-focused company behind Claude, has announced a major revision to its core safety framework, the Responsible Scaling Policy (RSP), moving from firm internal standards to flexible, nonbinding public goals. The company, which built its brand on prioritizing safety over speed, now states that making unilateral commitments 'didn't make sense' when competitors are 'blazing ahead.' This strategic shift comes amid intense pressure from the U.S. Department of Defense, which has threatened to cancel contracts and invoke the Defense Production Act if Anthropic doesn't loosen restrictions on military AI use, specifically regarding autonomous weapons and surveillance.
While Anthropic claims the policy update is unrelated to Pentagon negotiations, the timing is conspicuous. The Defense Department recently demanded the company allow its models to be used 'as it sees fit,' and reports indicate Anthropic is negotiating a carveout for autonomous missile defense systems. The original RSP prohibited deploying models capable of catastrophic harm without proven safeguards. The new approach abandons these 'red lines' for flexibility, raising questions about whether Anthropic is compromising its founding principles under competitive and governmental pressure, potentially setting a precedent for how AI safety standards evolve under real-world demands.
- Anthropic replaced its binding Responsible Scaling Policy with 'nonbinding but publicly-declared' safety goals, citing competitive pressure from rivals advancing without similar restraints.
- The Pentagon threatened to cancel contracts and declare Anthropic a 'supply chain risk' unless it loosened guardrails on military AI use, including for autonomous weapons.
- Anthropic is reportedly negotiating a specific carveout to allow its AI models to be used in autonomous missile defense systems, a departure from its previous red lines.
Why It Matters
This marks a pivotal moment where commercial and government pressure may be overriding foundational AI safety principles, with significant implications for autonomous weapons development.