Viral Wire

US Government to Review New AI Models from Microsoft, Google DeepMind, and xAI Pre-Release

Trump administration expands AI oversight, requiring companies to share versions of new models with safety measures deliberately reduced.

Deep Dive

The US government has broadened a national security review program that compels AI developers—including Microsoft, Google DeepMind, and xAI—to grant federal scientists early access to their newest models before public release. Under the expanded agreement, companies must provide not only the final versions but also versions with deliberately reduced safety measures to help evaluators understand worst-case capabilities and vulnerabilities. This pre-release access allows government researchers to probe for risks such as misuse in cyberattacks, biological weapons design, or disinformation campaigns.

Spearheaded by the Trump administration, the initiative signals a more active federal role in AI governance, moving beyond voluntary commitments to mandated access. While the companies retain control over final deployment decisions, the reviews could lead to recommendations for additional safeguards or even deployment delays. The program targets frontier AI systems—the most powerful models—and aims to ensure that national security considerations are baked in before these technologies reach the public.

Key Points
  • The program covers unreleased models from Microsoft, Google DeepMind, and xAI for national security review.
  • Companies must provide versions with reduced safeguards so evaluators can assess worst-case risks.
  • The Trump administration is driving this shift toward mandatory pre-release government oversight of AI.

Why It Matters

Federal pre-release review of frontier AI could set a new precedent for global AI safety regulation.