AI Researchers' Views on Automating AI R&D and Intelligence Explosions
80% of leading AI researchers interviewed in a new study identify automating AI R&D as a severe and urgent risk in explosive growth scenarios.
A new study by researchers Severin Field, Raymond Douglas, and David Krueger reveals both striking consensus and sharp division among the AI elite. Drawing on interviews with 25 leading scientists from frontier labs (Google DeepMind, OpenAI, Anthropic, Meta) and top universities (UC Berkeley, Stanford, Princeton), the study finds that 80% (20/25) identify the automation of AI research itself (leading to recursive self-improvement or an 'intelligence explosion') as one of the most severe and urgent risks. While participants agreed AI will gradually transition from 'assistants' to 'autonomous AI developers,' a significant epistemic divide emerged: academic researchers expressed more skepticism about explosive growth timelines than their industry counterparts at the labs building these systems.
Beyond the risk assessment, the study uncovers critical governance and access concerns. A majority (17/25) of participants expect that future AI systems with advanced coding or R&D capabilities will increasingly be reserved for internal use by AI companies or governments rather than released publicly, pointing to a future in which the most powerful tools for innovation are concentrated and tightly controlled. On regulation, researchers were split on setting hard 'red lines,' though nearly all favored transparency-based mitigations. The findings, published on arXiv, show that the field's top minds are actively grappling with the profound, near-term implications of creating AI that can build better AI, with no clear consensus on how to manage the potential acceleration.
- 80% (20/25) of the leading AI researchers surveyed, including scientists at OpenAI and Google DeepMind, view automating AI R&D as a severe and urgent risk.
- A majority (17/25) predict advanced AI R&D systems will be kept for internal company or government use, not released to the public.
- A clear divide exists: academic researchers are more skeptical of explosive growth timelines than industry researchers at frontier AI labs.
Why It Matters
The builders of AI are warning that self-improving AI could accelerate beyond control, concentrating power and demanding new governance models.