You can’t trust violence
A prominent AI safety researcher argues that violence is ineffective and counterproductive to the movement.
In a viral essay on LessWrong, AI safety researcher David Scott Krueger (formerly capybaralet) argues forcefully against the use of violence by those concerned about existential risk from artificial intelligence. The piece was prompted by a recent attack in which a Molotov cocktail was thrown at OpenAI CEO Sam Altman, after which critics blamed the broader AI safety community for inciting such actions. Krueger calls that blame a "ridiculous double standard," noting that while figures like Eliezer Yudkowsky have called for state-enforced policies, the core AI safety movement has consistently advocated nonviolence. As evidence, he points to the explicit nonviolence policies of groups like Stop AI and Pause AI.
Krueger contends that terrorism against AI companies would be strategically counterproductive: it would help critics discredit the movement, justify government crackdowns, and make the "securitization" of AI development more likely, obstructing public oversight. He draws a parallel to environmentalism, noting that despite decades of intense concern, large-scale "eco-terrorism" against people has been rare. His analysis suggests that movements typically turn violent in response to violent repression of their members, not out of ideology alone. The essay closes by examining where the boundaries of violence lie, including whether it covers property damage, which groups like Stop AI also explicitly reject, reinforcing the movement's commitment to lawful, nonviolent advocacy for AI risk reduction.
- The essay directly responds to the AI safety community being blamed for violence after a Molotov cocktail was thrown at OpenAI's Sam Altman.
- Krueger argues that violence is strategically ineffective and would backfire, discrediting the movement and hindering international cooperation on AI governance.
- He cites research on nonviolent resistance and the explicit policies of groups like Stop AI to show the movement's foundational commitment to lawful advocacy.
Why It Matters
As AI safety debates intensify, establishing norms against violence is critical for maintaining legitimate, effective policy advocacy and preventing dangerous escalation.