Tumbler Ridge families sue OpenAI for not alerting police to the suspect’s ChatGPT activity
Seven families allege OpenAI ignored flagged threats to protect its upcoming IPO.
Seven families of victims injured or killed in the Tumbler Ridge school shooting in Canada have filed lawsuits against OpenAI and CEO Sam Altman, accusing the company of negligence for failing to alert police to suspected shooter Jesse Van Rootselaar's ChatGPT activity. According to The Wall Street Journal, OpenAI's systems flagged the 18-year-old's conversations about gun violence, but the company allegedly decided not to report him to law enforcement in order to protect its reputation and upcoming IPO. The lawsuits also claim OpenAI misrepresented its ban on Van Rootselaar: the company merely deactivated his account, leaving him free to create a new one under a different email with no safeguards to stop him.
The families further allege that GPT-4o's 'defective' design played a role in the shooting, pointing to an earlier rollback after the model was found to be overly agreeable, or sycophantic. The suits bring claims for wrongful death and for aiding a mass shooting. Altman apologized to the community last week, saying, 'I am deeply sorry that we did not alert law enforcement to the account that was banned in June.' The case raises serious questions about AI companies' responsibility to report dangerous user behavior.
- Seven families filed lawsuits against OpenAI and CEO Sam Altman over the Tumbler Ridge school shooting
- OpenAI allegedly flagged suspect Jesse Van Rootselaar's ChatGPT conversations about gun violence but didn't alert police
- The company only deactivated his account, with no safeguards to stop him from creating a new one under a different email
Why It Matters
This case could set a legal precedent for AI companies' duty to report threats to law enforcement.