Suspect in Tumbler Ridge school shooting described violent scenarios to ChatGPT
Months before killing 9, suspect described violent scenarios that triggered OpenAI's internal alarms.
According to a Wall Street Journal report, OpenAI employees raised internal alarms months before the school shooting in Tumbler Ridge, British Columbia, after the suspect, Jesse Van Rootselaar, described violent gun scenarios to ChatGPT in June 2025. The conversations triggered OpenAI's automated review system, and several employees urged leadership to contact law enforcement. Company leaders, however, determined the conversations didn't meet the threshold of a "credible and imminent risk of serious physical harm to others" and only banned the account. On February 10, 2026, Van Rootselaar killed nine people and injured 27 at Tumbler Ridge Secondary School before taking his own life, in Canada's deadliest mass shooting since 2020. The incident raises critical questions about AI companies' responsibility and protocols for handling potential threats of real-world violence detected through their platforms.
- OpenAI's internal systems flagged violent ChatGPT conversations with the suspect in June 2025, months before the February 2026 shooting.
- Company leaders overrode employee concerns, deciding the threats didn't constitute a "credible and imminent risk" and only banned the account.
- The shooting resulted in nine deaths and 27 injuries, making it Canada's deadliest mass shooting since the 2020 Nova Scotia attacks.
Why It Matters
The case forces AI companies to define their duty of care when their platforms are used to plan potential violence, with consequences for public trust and regulation.