AI Safety

Law Proofing the Future

Contrarian paper says chasing tech with new laws entrenches incumbents and stifles innovation.

Deep Dive

In a legal paper titled 'Law Proofing the Future,' Gregory M. Dickinson challenges the prevailing narrative that lawmakers must 'future-proof' laws against technologies such as generative AI (e.g., ChatGPT) and algorithmic decision-making. Dickinson argues that the historical pattern is clear: from the printing press to deepfakes, technological breakthroughs provoke wonder, then fear, then reactive legislation. These new legal regimes, he contends, often entrench market incumbents, suppress open experimentation, and replace stable, general legal principles with 'bespoke but brittle' rules.

Drawing on history, economics, and legal theory, Dickinson posits that the most effective governance tools already exist in the form of general-purpose common law. These principles, which predate modern technology, offer the virtues of generality, stability, and judicial adaptability. The paper highlights the 'epistemic limits' of technological forecasting and the hidden costs of early legislative intervention, such as regulatory capture and biased enforcement. His central thesis is that the law must not chase technology; instead, deliberate legal restraint is needed to preserve the conditions for freedom and equal justice, allowing both law and technology to evolve organically.

Key Points
  • Identifies a historical cycle in which new technologies (from the printing press to ChatGPT) provoke fear-based, innovation-stifling laws.
  • Argues existing common law is superior to new, brittle AI-specific regulations for governing technological change.
  • Calls for legal restraint to avoid regulatory capture and preserve conditions for technological and legal evolution.

Why It Matters

Provides a foundational argument against rushed AI regulation, influencing policy debates and tech industry legal strategy.