Enterprise & Industry

OpenAI flagged Canadian school shooter months before massacre but did not alert police

ChatGPT misuse for 'violent intent' was detected in June 2025, but OpenAI didn't alert police.

Deep Dive

OpenAI revealed it identified and banned the ChatGPT account of Jesse Van Rootselaar in June 2025 for policy violations related to the 'furtherance of violent activities.' The company's internal review considered a referral to the Royal Canadian Mounted Police (RCMP) but concluded the activity did not meet its high threshold for law enforcement referral, which requires an 'imminent and credible risk of serious physical harm.' Months later, in February 2026, the 18-year-old carried out a mass shooting at Tumbler Ridge Secondary School in British Columbia, killing eight people before dying by suicide. Following the tragedy, OpenAI proactively contacted the RCMP with the user's information. The incident, first reported by The Wall Street Journal, raises critical questions about AI companies' threat-detection protocols and their responsibility to act on warning signs.

Key Points
  • OpenAI detected and banned the shooter's account for 'violent intent' in June 2025, months before the February 2026 attack.
  • The company's policy requires an 'imminent and credible risk of serious physical harm' before referring a user to police, a threshold it determined was not met at the time.
  • After the shooting, OpenAI provided the user's ChatGPT activity data to the Royal Canadian Mounted Police to aid their investigation.

Why It Matters

This case forces a major ethical and legal reckoning for AI companies over when, and how, they should act on threats their systems detect.