Barry Diller trusts Sam Altman. But ‘trust is irrelevant’ as AGI nears, he says.
Media mogul warns that even AI creators can't predict AGI's consequences.
At The Wall Street Journal's Future of Everything conference, media billionaire Barry Diller (co-founder of Fox Broadcasting, chairman of IAC and Expedia Group) addressed the question of whether OpenAI CEO Sam Altman can be trusted to guide artificial intelligence for humanity's benefit. Despite recent accusations from former colleagues and board members that Altman can be manipulative and deceptive, Diller vouched for him, calling him sincere and a decent person with good values. However, Diller stressed that the real issue isn't Altman's trustworthiness but the inherent unpredictability of AI development. "Trust is irrelevant because the things that are happening are a surprise to the people who are making those things happen," Diller said, noting that AI creators themselves express wonder at the outcomes.
Diller focused on artificial general intelligence (AGI) — a form of AI that can outperform humans on any task — which he says is drawing near. He cautioned that as AGI gets closer, its unknown consequences become more dangerous. "They don't know what can happen once you get AGI, and we're close to it," he warned. Diller called for society to implement guardrails proactively; otherwise, AGI could impose its own rules. "Once you unleash that, there's no going back," he said. While Diller believes most AI leaders are good stewards (declining to name exceptions), he emphasized that even the brightest minds cannot foresee AGI's full impact, making trust ultimately irrelevant to the outcome.
- Barry Diller publicly defended Sam Altman's character but said trust is irrelevant for AGI's unknown consequences.
- Diller warned that AI creators themselves cannot predict AGI's outcomes and may be surprised by what emerges.
- He urged society to establish guardrails before AGI arrives, or risk AGI imposing its own irreversible rules.
Why It Matters
Diller's comments highlight that even trusted AI leaders may not control AGI's risks, underscoring the urgent need for governance.