AI Safety

What happens after we stop AI?

What if we actually stopped AI? The debate heats up.

Deep Dive

The article poses a critical question: what happens after we stop AI? It uses a vivid analogy of a burning house to argue that the immediate priority should be to 'put out the fire'—halt AI advancement—before debating long-term strategies like rebuilding or prevention. The author acknowledges that deeper questions about AI's role in society are contentious but insists they can wait until after a pause.

During an indefinite pause, the author suggests a reckoning with the systemic issues that led to the AI race, including competitive pressures driving reckless development. They advocate for collective decision-making and a new 'bill of rights' for the information age, addressing privacy, accountability, and human dignity. Specific ideas include the right to talk to a person when dealing with large organizations and the right to appeal automated decisions. The goal is to shift from competitive to collective interests in shaping AI's future.

Key Points
  • Stopping AI is likened to putting out a house fire: urgent and necessary before planning next steps.
  • An indefinite pause should include a reckoning with competitive pressures that drove AI development.
  • Proposes a new 'bill of rights' for the information age, including the right to human interaction and the right to appeal automated decisions.

Why It Matters

The piece challenges the tech community to treat pausing AI as a safety imperative rather than an obstacle to progress.