AI Safety

Considerations for growing the pie

A widely shared LessWrong post outlines nine philosophical and strategic considerations against zero-sum AI power struggles.

Deep Dive

A post titled 'Considerations for growing the pie' by Zach Stein-Perlman has gained significant traction on the LessWrong forum, presenting a philosophical and strategic case for cooperative approaches in shaping the future of AI. The core argument contrasts two intervention types: 'growing the pie' (increasing overall capabilities and safety for humanity) versus 'increasing our friends' share of the pie' (seeking relative advantage for a specific value-aligned group). Stein-Perlman outlines nine considerations that generally favor the cooperative, pie-growing approach.

The first major category involves decision-theoretic arguments, drawing heavily on analogies to the prisoner's dilemma and considerations of acausal cooperation. The post suggests that if many actors (including potential AI systems or simulated versions of ourselves) face similar choices, strategies that lead to mutual cooperation create a better expected world for all, even without direct reciprocation. This is bolstered by pragmatic considerations: working on goals perceived as universally good (like AI safety) garners more widespread support and cooperation from the broader world, which is often necessary for large-scale impact.
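The prisoner's-dilemma intuition above can be made concrete with a small payoff model. The sketch below is illustrative only (the numbers and the `correlation` framing are assumptions, not figures from the post): if your choice is correlated with the choices of many similar actors, cooperation can beat defection in expectation even though defection dominates causally.

```python
# Illustrative sketch of the acausal-cooperation intuition, not the
# post's own model. Payoff numbers are standard prisoner's-dilemma
# values chosen for illustration.

# Payoff (to "you") for each (your_move, other_move) pair:
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # you cooperate, they defect
    ("D", "C"): 5,  # you defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def expected_payoff(my_move: str, correlation: float) -> float:
    """Expected payoff when the other actor makes the same choice as
    you with probability `correlation` -- the idea that similar agents
    facing similar choices tend to decide alike."""
    other_move = "D" if my_move == "C" else "C"
    return (correlation * PAYOFF[(my_move, my_move)]
            + (1 - correlation) * PAYOFF[(my_move, other_move)])

# With zero correlation, defection dominates (the causal view):
assert expected_payoff("D", 0.0) > expected_payoff("C", 0.0)

# With strongly correlated choices, cooperation wins in expectation:
assert expected_payoff("C", 0.9) > expected_payoff("D", 0.9)  # 2.7 > 1.4
```

The point of the toy model is only that the ranking of strategies flips once decisions are treated as correlated rather than independent, which is the structure the post's decision-theoretic considerations rely on.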

Further considerations include the idea that worlds in which moral reflection leads humans to converge on shared values are higher-stakes and more worth protecting; the direct ethical weight of others' considered values; and epistemic humility, the recognition that one's own group may not use power well. The post, which cites inspiration from figures like Paul Christiano and Will MacAskill, clarifies that 'growing the pie' encompasses not just preventing catastrophic AI takeover but also advancing fields like metaphilosophy and decision theory, and creating robust deliberative processes for humanity's long-term future.

Key Points
  • Argues for 'growing the pie' (cooperative capability/safety growth) over 'increasing share' (factional power grabs) using nine distinct considerations.
  • Uses prisoner's dilemma and acausal cooperation models to justify cooperation even without guaranteed reciprocation.
  • Highlights pragmatic benefits: universally good projects like AI safety attract more external support and resources.

Why It Matters

Provides a strategic and ethical framework for AI labs and researchers to prioritize cooperation over competition in development.