AI Safety

Schmidt Sciences’ request for proposals on the Science of Trustworthy AI

Eric Schmidt's foundation seeks proposals to transform AI safety from 'alchemy' to science.

Deep Dive

Schmidt Sciences, the philanthropic foundation founded by former Google CEO Eric Schmidt, has issued a significant Request for Proposals (RFP) for its Science of Trustworthy AI program. The initiative aims to fund technical research that improves our ability to understand, predict, and control risks from frontier AI systems while enabling their trustworthy deployment. The RFP is grounded in a detailed research agenda that critiques current AI development as resembling 'alchemy more than a mature science,' with researchers adding more data and compute and hoping desirable properties emerge. The program seeks to establish a more scientific foundation for AI safety, addressing core challenges such as technical alignment (ensuring a system's behavior matches its intended specification) and the divergence between a model's effective behavioral goals and the user's intent.

The RFP outlines three connected research aims: characterizing and forecasting misalignment in frontier AI systems (Aim 1), developing generalizable measurements and interventions with decision-relevant validity (Aim 2), and overseeing AI systems with superhuman capabilities while addressing multi-agent risks (Aim 3). Funding is structured in two tiers: Tier 1 offers up to $1 million and Tier 2 offers $1 million to more than $5 million, both for projects running one to three years. Schmidt Sciences explicitly states it is 'most interested in ambitious Tier 2 proposals that, if successful, would change what the field believes is possible.' Proposals must be submitted via SurveyMonkey Apply, and the research agenda emphasizes questions around goal misalignment, underspecification, and oversight mechanisms for regimes where humans cannot directly evaluate whether an AI system's outputs are correct.

Key Points
  • Offers two funding tiers: Tier 1 (up to $1M) and Tier 2 ($1M-$5M+) for 1-3 year projects
  • Focuses on three core aims: characterizing misalignment, developing measurements/interventions, and overseeing superhuman AI
  • Critiques current AI development as 'alchemy' and seeks to establish a scientific foundation for AI safety

Why It Matters

Directly funds critical AI safety research that could prevent catastrophic misalignment in future superhuman systems.