AI Safety

Americans For Moskovitz

A new political movement aims to draft Dustin Moskovitz for the 2028 election, citing a critical AI safety leadership gap.

Deep Dive

A detailed post on the rationality forum LessWrong, titled 'Americans For Moskovitz,' is sparking discussion in AI governance circles. Author Oliver Kuperman frames the 2028 U.S. presidential election as uniquely consequential for managing the risks of artificial general intelligence (AGI), citing expert forecasts that give a 50% probability of AGI arrival by the early to mid-2030s. The post argues that the next president could serve through this pivotal period and influence the 2032 election, yet current political frontrunners show little commitment to AI safety policies aimed at mitigating existential risk (x-risk).

Kuperman critiques potential candidates from both parties: California Governor Gavin Newsom for vetoing an AI safety bill (SB 1047), Representative Alexandria Ocasio-Cortez for downplaying AI's transformative potential, and Vice President J.D. Vance and Secretary of State Marco Rubio for dismissing or neglecting safety concerns. The proposed solution is a political draft movement for Dustin Moskovitz, the Facebook and Asana co-founder who also co-founded the effective altruist grantmaking organization Open Philanthropy. The argument posits that Moskovitz is uniquely suited: he combines deep funding of and advocacy within the AI safety community, a track record of large-scale entrepreneurship that lends political credibility, and a worldview that takes AI x-risk seriously.

The campaign has launched a website and petition to gauge support and 'test the feasibility' of a Moskovitz run. The effort represents a direct attempt by segments of the AI safety and effective altruism communities to influence the highest levels of U.S. political leadership ahead of a perceived short timeline to AGI. The post has generated significant engagement on LessWrong, with commenters weighing the strategic rationale and flagging technical issues such as a broken link in the original article.

Key Points
  • The 2028 U.S. election is framed as critical for AI governance: a forecast 50% chance of AGI by the early-to-mid 2030s would give the winner outsized influence over the transition.
  • The post critiques likely 2028 frontrunners (Newsom, Ocasio-Cortez, Vance, Rubio) for lacking a serious commitment to AI safety and x-risk mitigation.
  • The proposed remedy is drafting Dustin Moskovitz, citing his EA/AI safety funding via Open Philanthropy and his tech-founder credibility as unique qualifications.

Why It Matters

Signals a growing push by AI safety advocates to enter electoral politics directly, aiming to shape policy before advanced AI systems arrive.