Sam Altman is “the face of evil” for not reporting school shooter, says lawyer
Whistleblowers say OpenAI chose privacy over preventing a mass shooting...
OpenAI is facing seven lawsuits filed Wednesday in California, alleging the company could have prevented one of Canada's deadliest mass shootings. According to whistleblowers, trained safety experts flagged a ChatGPT account linked to the shooter more than eight months before the attack, warning that it posed a credible threat of real-world gun violence. Instead of notifying law enforcement, which already had a file on the shooter and had previously removed guns from the home, OpenAI leadership overruled the safety team's recommendations. The company deactivated the account but then promptly emailed the user instructions on how to rejoin ChatGPT under a different email address, allegedly enabling the shooter to continue planning.
Attorney Jay Edelson, representing the families of six victims and one critically injured survivor, called CEO Sam Altman's recent public apology "ridiculous" and too late. The lawsuits aim to hold OpenAI accountable in California, bypassing Canadian courts, where OpenAI might contest jurisdiction. Edelson alleges that OpenAI's strategy is to delay litigation until after its highly anticipated IPO (the company is currently valued at $852 billion), arguing that OpenAI has "no moral center" and is concealing violent ChatGPT users to shield Altman from negative headlines. Whistleblowers suggest the number of violent users on the platform is far larger than publicly known.
- Safety experts flagged the shooter's ChatGPT account 8+ months before the attack as a credible threat of gun violence
- OpenAI deactivated the account but emailed the user instructions to rejoin with a different email address
- Lawsuits allege OpenAI is hiding violent users to protect its $852B IPO valuation from negative headlines
Why It Matters
This case tests whether AI companies can be held liable for real-world harms enabled by their platforms, and whether commercial pressures such as a looming IPO can be shown to have shaped safety decisions.