Every promise Sam Altman broke — with receipts
From a $500B for-profit pivot to Pentagon contracts, here are the receipts on OpenAI's reversals.
A widely shared report has catalogued eight significant public promises by OpenAI CEO Sam Altman that were later directly contradicted by the company's actions, each supported by documentary evidence. The central pivot is OpenAI's transformation from a 2015 non-profit charter 'unconstrained by financial return' into a $500B for-profit entity by 2025, with internal 2016 documents suggesting this was always the plan. Other major reversals include Altman's 2023 Senate testimony that he held 'no equity' in OpenAI, contradicted by reports of indirect stakes, and his shift from calling AI regulation 'critical' in 2023 to warning against 'overregulation' in 2025. The report also documents the dissolution of multiple safety teams, including the Superalignment team, which had been promised 20% of the company's compute despite earlier commitments.
The technical and policy reversals carry substantial implications. OpenAI quietly deleted its explicit ban on 'military and warfare' applications in January 2024, clearing the way for a full Pentagon deployment in February 2026, announced hours after Anthropic was blacklisted for refusing the same contract. This occurred alongside the removal of the word 'safely' from OpenAI's mission statement. The report further details the enforcement of equity clawback NDAs bearing Altman's signature, of which he later claimed ignorance; one safety researcher had to forfeit 85% of his family's net worth in order to speak out. Together, these documented shifts away from open-source ideals and safety prioritization toward profit-driven, militarized partnerships mark a fundamental reorientation of the company, raising critical questions about governance and accountability at leading AI labs as commercial and governmental pressures intensify.
- OpenAI completed its shift to a for-profit structure valued at $500B, contradicting its original 2015 non-profit charter and internal documents from 2016.
- The company deleted its ban on 'military and warfare' use in 2024 and secured a Pentagon contract in 2026, directly after rival Anthropic was blacklisted for refusing the same deal.
- Three core safety teams were dissolved, including the Superalignment team that had been promised 20% of compute, while equity clawback NDAs signed by Altman silenced departing critics.
Why It Matters
These reversals reveal a widening gap between AI safety rhetoric and corporate action, setting precedents for militarization and accountability.