AI Safety

"Do Not Start Arguments You Cannot Finish"

A technical AI safety expert details the fatigue of explaining existential risk to newcomers, sparking a debate on discourse norms.

Deep Dive

A viral post on the AI forum LessWrong, titled 'Do Not Start Arguments You Cannot Finish,' has resonated deeply within the technical AI safety community. Written by researcher J Bostock, the essay candidly describes a profound fatigue with the repetitive, often thankless task of explaining arguments for AI existential risk (x-risk) to newcomers. Bostock recounts watching a young, driven advocate at a PauseAI meetup engage in a familiar debate, triggering a sense of 'predictive exhaustion': knowing in advance exactly how the exchange will unfold. The author argues that introducing complex topics to the uninitiated is rote work that could be handled by a book, and highlights the mental toll on experts who have been having these conversations for years.

Bostock then moves from description to analysis, examining the social norms at play. The post asks whether it is socially graceful, or a good discourse norm, for experts to hedge or avoid these conversations simply to spare themselves a 'slog' of explanation. A key concern is the ethics of making bold claims (like 'we're all going to die') without the willingness or energy to engage the multi-level critiques such claims inevitably attract, a practice the author likens to permitting a form of trolling. The piece concludes that the expectation to respond to criticism depends heavily on context (where, how, and by whom a claim is made), implying that casual settings may not be the right arena for initiating such high-stakes debates.

Key Points
  • The post details 'predictive exhaustion' from repeatedly explaining AI x-risk arguments, a common experience for safety researchers.
  • It critiques the social norm of making bold claims in casual settings without the capacity for deep, multi-level engagement with critics.
  • The essay has sparked a broad community discussion about effective communication strategies for complex, high-stakes topics like AI safety.

Why It Matters

Highlights a critical bottleneck in AI safety: expert burnout from public discourse could hinder vital public understanding and policy debates.