“The problem is Sam Altman”: OpenAI insiders don’t trust the CEO

A 100-source probe details alleged deceptions, clashing with OpenAI's new public safety pledges.

Deep Dive

On the same day OpenAI released a policy framework advocating a safe, human-first future with superintelligent AI, The New Yorker published a damning investigation into CEO Sam Altman's trustworthiness. The report, based on more than 100 interviews and a review of internal communications, portrays Altman as a charismatic leader whose eagerness to please is matched by an alleged “sociopathic lack of concern” for the consequences of deception. Former chief scientist Ilya Sutskever and former research lead Dario Amodei documented an “accumulation of alleged deceptions and manipulations,” concluding that Altman was not creating a safe environment for advanced AI. Altman disputes many of the incidents or says he does not recall them, attributing shifts in his positions to the evolving AI landscape.

The timing creates a jarring dissonance: OpenAI's public-facing documents promise transparency and mitigation of existential risks, while the investigation suggests foundational distrust in the company's leader. The scrutiny intensifies as public anxiety grows over AI's societal impact, from job displacement to energy use. OpenAI's new policy paper, which includes ideas such as a public wealth fund, is seen by some as an effort to counter that negative perception. With government reliance on OpenAI's models increasing and regulatory pressure mounting, the credibility gap highlighted by The New Yorker could significantly hinder public and political buy-in for the company's vision, potentially influencing upcoming elections and data center policy.

Key Points
  • The New Yorker's investigation drew on more than 100 sources and internal memos, revealing a pattern of alleged deception by Altman.
  • Former executives Ilya Sutskever and Dario Amodei concluded Altman was not fostering a safe environment for advanced AI development.
  • The report landed as OpenAI released major superintelligence governance proposals, creating a stark contrast between public promises and internal culture.

Why It Matters

Trust in leadership is critical as OpenAI seeks to define global AI policy and manage systems with profound societal risks.