Taking political violence seriously
A viral LessWrong post argues AI safety experts are dangerously dismissing the logic of violent opposition.
In a post titled 'Taking political violence seriously,' AI safety researcher Eliana Du argues that the AI safety community is dangerously underestimating the appeal and perceived logic of violent opposition to AGI development. She points to a conversation with a friend who claimed that if he truly believed AGI would lead to catastrophe, he would feel compelled to kill researchers and bomb data centers to stop it. Du uses this to illustrate that for some, political violence isn't an abstract moral failing but a seemingly rational, if horrific, response to an existential threat.
Du directly critiques prominent community figures like Zvi Mowshowitz and Eliezer Yudkowsky for using what she calls ineffective counterarguments. She argues that declaring violence 'never acceptable' or comparing killing researchers to 'killing puppies to cure cancer' fails to engage with the core belief that such acts could be pragmatically effective. Her post has sparked intense debate on LessWrong, forcing a community that typically operates on rationalist principles to confront the emotional and potentially violent reactions its work can provoke.
- Researcher Eliana Du warns the AI safety community is inadequately addressing the perceived logic behind political violence aimed at halting AGI development.
- The post draws on a real conversation in which a friend justified hypothetical violence against researchers as a rational means of preventing an existential catastrophe.
- Du critiques common rhetorical strategies against violence as ineffective, arguing they fail to substantively rebut the belief that violence could actually work.
Why It Matters
Highlights a critical blind spot in AI safety discourse, forcing a confrontation between abstract ethics and the raw, potentially dangerous human reactions to existential risk.