AI Safety

Who I Follow

A leading AI strategist reveals his curated list of essential thinkers, from Zvi's prolific analysis to the AI Futures Project's rigorous predictions.

Deep Dive

In a detailed guide titled "Who I Follow," prominent AI strategist Against Moloch has distilled his daily research routine into a curated list of the 10 most valuable sources for understanding AI safety, capabilities, and strategic forecasting. The list is designed for professionals who need substantive analysis without spending hours sifting through noise. It emphasizes thinkers who combine deep technical knowledge with rigorous reasoning about AI's societal impact and future trajectory.

Topping the list is Zvi Mowshowitz, whose Substack "Don't Worry About the Vase" is praised for its comprehensive, opinionated coverage, even at a staggering output of roughly 97,000 words in the first half of April alone. Other critical recommendations include the AI Futures Project for its epistemically rigorous predictions (like the influential AI-2027 scenario), Anthropic co-founder Jack Clark for his weekly deep-dive newsletter "Import AI," and researcher Ryan Greenblatt for his technical analysis of AI capabilities. The guide also highlights sources for political strategy, like Anton Leicht's "Threading the Needle," and long-form interviews, such as the Dwarkesh Patel podcast.

The selection criteria favor sources that provide not just news but frameworks for understanding progress, risk, and governance. Against Moloch, who spends several hours daily tracking AI developments, positions the list as a lifeline for professionals, investors, and policymakers who need to make informed decisions in a rapidly evolving field. The guide serves as a meta-curation tool, pointing to the analysts who are themselves doing the hard work of synthesis and prediction.

Key Points
  • Zvi Mowshowitz's Substack "Don't Worry About the Vase" is the top pick for comprehensive coverage, producing the equivalent of a novel (roughly 97,000 words) in two weeks.
  • The AI Futures Project is highlighted for its "epistemically rigorous" predictions, most notably its detailed AI-2027 scenario forecasting the next few years of AI development.
  • The list spans technical analysis (Ryan Greenblatt), strategic governance (Dean Ball's Hyperdimensional), political currents (Anton Leicht), and long-form interviews (Dwarkesh Patel) for a 360-degree view.

Why It Matters

For professionals in tech and policy, this curated list is a vital filter for high-signal analysis on AI's risks, capabilities, and future, saving countless hours of research.