Startups & Funding

A roadmap for AI, if anyone will listen

A bipartisan coalition of hundreds of experts proposes mandatory off-switches and a ban on self-replicating AI.

Deep Dive

A bipartisan coalition of hundreds of AI experts, former officials, and public figures, organized by MIT physicist Max Tegmark, has released the Pro-Human Declaration. The framework outlines a responsible path for AI development, in stark contrast to the current unregulated landscape, recently highlighted by the dispute between Anthropic and the Pentagon. The declaration describes humanity as standing at a fork in the road: one path leads to humans being replaced; the other uses AI to massively expand human potential. It establishes five core pillars to guide the latter, including keeping humans in charge, avoiding the concentration of power, and holding AI companies legally accountable.

Among its most specific and muscular provisions are an outright prohibition on superintelligence development until scientific consensus on safety is achieved, mandatory off-switches for powerful systems, and a ban on AI architectures capable of self-replication or autonomous self-improvement. The declaration's urgency is underscored by recent events, such as the Pentagon labeling Anthropic a 'supply chain risk' after a contract dispute, which exposed the costly absence of coherent federal AI regulation. Tegmark argues that public pressure, particularly around child safety, could catalyze change, noting that the framework calls for mandatory pre-deployment testing of AI products aimed at younger users for risks such as emotional manipulation and suicidal ideation.

Key Points
  • Proposes an outright ban on superintelligence development until scientific consensus on safety is reached and democratic buy-in is secured.
  • Mandates 'off-switches' for powerful AI systems and bans architectures capable of self-replication or resisting shutdown.
  • Calls for mandatory pre-deployment testing, especially for chatbots and companion apps targeting children, to assess risks like emotional manipulation.

Why It Matters

It provides a concrete, expert-backed policy framework to regulate powerful AI, aiming to prevent catastrophic risks and protect society from unaccountable systems.