AI Safety

OpenAI employees: Now is the time to stop doing good work.

Viral post calls for internal slowdown after OpenAI accepts military contract Anthropic refused.

Deep Dive

A provocative post on the LessWrong forum has gone viral, directly urging OpenAI's workforce to engage in a coordinated work slowdown, or 'sabotage by incompetence.' The call stems from intense backlash against OpenAI's recent decision to accept a contract with the U.S. military to develop automated weapons and mass surveillance systems. The author highlights that Anthropic, OpenAI's key competitor, refused the same contract on ethical grounds, specifically demanding a human-in-the-loop for autonomous weapons—a safeguard reportedly absent from OpenAI's agreement, which defers only to existing law. The post frames this as a fundamental betrayal, accusing CEO Sam Altman of advocating for a form of 'Actual Fascism' by suggesting that democratically elected governments should have the power to compel private companies to provide any service.

The author's strategy is pragmatic rather than purely moralistic: instead of demanding mass resignations, they advise employees to 'stop doing good work.' This includes ignoring bugs, performing superficial code reviews, leaning on sloppy 'vibe coding,' and otherwise withdrawing effort to create internal friction and financial pressure. The logic is that OpenAI's valuation is built on replacing human labor with AI, making its own workforce a key leverage point. With a booming AI job market, the risk to individual careers is deemed low, while the potential impact on a company dependent on elite talent could be significant. The post reflects a growing schism in AI ethics, in which employee action is presented as the last resort against leadership decisions perceived as dangerously aligning with state power.

Key Points
  • OpenAI accepted a U.S. military contract for autonomous weapons systems that Anthropic refused, citing ethical concerns over the lack of a guaranteed 'human-in-the-loop.'
  • The viral post advocates 'soft sabotage'—ignoring bugs, skipping thorough reviews—arguing that employee productivity is precisely what investors expect AI to replace.
  • The strategy banks on a hot AI job market to protect protesting employees, aiming to hit OpenAI where it hurts: its reliance on top-tier engineering talent.

Why It Matters

Marks a potential shift in the AI ethics debate from public outcry to organized internal pressure, a tactic that targets a company's core operations.