Sam Altman says AI superintelligence is so big that we need a "New Deal." Critics call OpenAI's policy ideas a cover for "regulatory nihilism."
OpenAI's 13-page policy paper proposes radical societal changes, released amid questions about Altman's trustworthiness.
OpenAI has published a 13-page policy paper, "Industrial Policy for the Intelligence Age," calling for a societal "New Deal" to prepare for the era of superintelligence: AI systems that outperform the smartest humans. The document proposes radical overhauls, including restructuring tax systems and redefining the standard workday, aiming to "kick-start" a conversation on people-first policies for a future dominated by advanced AI.
The paper's release, however, was immediately clouded by a lengthy investigation from The New Yorker, published the same day, that raises serious questions about CEO Sam Altman's trustworthiness, particularly regarding AI safety commitments. The timing has fueled criticism from experts who argue that OpenAI's broad, society-level policy ideas amount to "regulatory nihilism": a strategy to deflect calls for immediate, concrete regulation of the company's own powerful AI models, such as GPT-4. The controversy highlights the growing tension between AI labs advocating long-term, speculative governance and critics demanding accountable, near-term oversight.
- OpenAI's 13-page "Industrial Policy for the Intelligence Age" paper proposes a "New Deal" of tax and workday reforms for the AI age.
- The proposal landed the same day as a New Yorker investigation questioning CEO Sam Altman's trustworthiness on AI safety.
- Critics label the broad policy ideas "regulatory nihilism": a cover for avoiding specific, binding regulations on AI development.
Why It Matters
This debate shapes whether AI governance focuses on speculative future risks or accountable, present-day regulation of powerful models.