The Latest AI Documentary Asks: Just How Scared Should We Be?
Oscar-winning director's new film interviews AI CEOs but finds them 'skating by on glib answers' about responsibility.
Oscar-winning director Daniel Roher's new documentary, 'The AI Doc: Or How I Became an Apocaloptimist,' secures rare interviews with AI industry leaders including OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and DeepMind CEO Demis Hassabis. However, the film critiques these executives for offering familiar, glib answers about the profound responsibilities they hold. When Roher presses Altman on whether the public should trust him, Altman replies bluntly, 'You shouldn't,' and the film notes the exchange ends abruptly there. Framed by Roher's personal anxiety about bringing a child into a world shaped by AI, the documentary serves as an accessible primer on the technology's potential and perils, featuring plain-language explanations and creative visuals influenced by producer Daniel Kwan.
The film presents a stark dichotomy, featuring dire warnings from figures like Tristan Harris of the Center for Humane Technology, who suggests some AI researchers don't expect their children to make it to high school because of coming societal disruption. These warnings are contrasted with techno-optimistic promises from Silicon Valley about curing disease and solving climate change. Despite accurately portraying the unregulated 'gold rush' and the concentration of power it has produced, the documentary is criticized for ultimately adopting a 'both-sides' stance that places the burden of steering AI's future on the public rather than on the powerful CEOs it interviews. This reads as a strange pivot given Roher's own previous critique of the AI economy as a 'Ponzi scheme.'
- Features CEOs Sam Altman (OpenAI), Dario Amodei (Anthropic), and Demis Hassabis (DeepMind), but finds their answers on safety and responsibility lacking.
- Presents extreme risks, with one expert saying some AI researchers fear for their children's futures, set against promises of curing disease.
- Critiqued for a 'both-sides' conclusion that tasks the public, not industry leaders, with guiding AI's future, despite highlighting concentrated power.
Why It Matters
Highlights the critical gap between AI leaders' vast influence and their vague public accountability, framing a central dilemma for policymakers and society.