We can prevent progress! Conceptual clarity and inspiration from the FDA
A viral LessWrong post challenges the inevitability of AI progress, citing historical tech slowdowns.
In a viral post on the AI discussion forum LessWrong, researcher Katja Grace directly challenges the common tech industry refrain that 'we can’t prevent progress.' She argues the phrase conflates two distinct concepts: the mere increase of technological capability and genuine societal improvement. Labeling all technological advancement as 'progress' biases the debate from the start, making any desire to govern AI seem inherently 'backward.' Grace insists that slowing or halting specific technologies is not only possible but has ample historical precedent.
Grace draws a pointed analogy to the U.S. Food and Drug Administration (FDA), which deliberately slows pharmaceutical development to ensure safety, and asks why a similar model is considered unthinkable for AI. She offers a substantial list of technologies that have been significantly slowed or halted: human cloning, certain genetic modifications (such as the CRISPR-edited babies in China), geoengineering, and recombinant DNA research following the Asilomar Conference. The post concludes that societal mechanisms for governing technology exist and have been used, making intentional, coordinated slowdown of AI a feasible policy option rather than a fantasy.
- Challenges the conflation of 'technological increase' with 'progress,' arguing the latter implies improvement, not just capability.
- Cites the FDA as a prime example of an institution that successfully slows tech (pharmaceuticals) for public safety.
- Lists historical precedents such as halted human cloning, regulated gene drives, and the Asilomar moratorium as proof that technological development can be governed.
Why It Matters
Provides a conceptual framework for policymakers and critics who argue that aggressive AI development is not inevitable and can be regulated.