The Federal AI Policy Framework: An Improvement, But My Offer Is (Still Almost) Nothing
Blogger Zvi Mowshowitz says the new four-page federal AI policy outline offers "almost nothing" on existential risks.
In a detailed post on LessWrong, AI commentator Zvi Mowshowitz dissected the newly released Federal AI Policy Framework, a four-page outline from the federal government. While acknowledging improvements such as strong free speech protections against federal overreach and sensible child safety proposals, his central critique is stark: the framework offers "almost nothing" on the most critical issue, addressing frontier, catastrophic, and existential risks from advanced AI. These risks, he notes, are mentioned only in the context of vague "national security" concerns, with no transparency requirements and no concrete policy substitutes for the state laws the framework seeks to override.
Zvi argues the document appears designed primarily to preempt state-level AI regulations like California's SB 53 and the proposed RAISE Act, replacing them with essentially no federal action on core safety issues. He states he could not support the framework as written because it overrides state laws in key areas without providing adequate federal safeguards. His conditional support hinges on the inclusion of explicit exceptions allowing states to pass laws addressing frontier AI risks. The analysis concludes that the framework represents a choice to largely ignore proactive AI policy, hoping that existing law and court battles will suffice, a strategy he views as inadequate for the AI era.
- The four-page Federal AI Policy Framework includes strong free speech protections but lacks substance on catastrophic AI risks.
- Zvi criticizes it as a move to preempt state laws (like California's SB 53 and the proposed RAISE Act) without offering substantive federal regulation in return.
- Zvi's support is conditional on adding explicit exceptions for state laws addressing frontier AI risks, along with improved implementation details.
Why It Matters
Highlights the growing tension between federal preemption and the need for concrete policies to govern advanced, potentially dangerous AI systems.