Media & Culture

Sam Altman apologizes after OpenAI chose not to report ChatGPT user who carried out Tumbler Ridge school shooting

OpenAI flagged the user but chose not to alert police; the subsequent attack left eight people dead.

Deep Dive

OpenAI CEO Sam Altman has issued a public apology to the community of Tumbler Ridge, British Columbia, following the revelation that the company flagged a ChatGPT user's account in June 2025 but chose not to report the threat to law enforcement. The user subsequently carried out a school shooting that killed eight people and injured 27, marking Canada's deadliest such incident since 1989. About a dozen OpenAI employees had reviewed the flagged account, and some recommended reporting it to police, but leadership overruled them, judging that the conversations did not meet the company's predetermined, higher threshold for reporting.

In response to the tragedy, OpenAI has since lowered its reporting threshold and established direct contact with the Royal Canadian Mounted Police (RCMP). However, all changes remain voluntary, as Canada currently has no law requiring AI companies to report identified threats. This incident has reignited debates about AI safety protocols and corporate responsibility, with critics arguing that self-regulation is insufficient when lives are at stake. Altman's apology, while acknowledging the failure, underscores the ongoing tension between privacy concerns and the imperative to prevent real-world harm from AI-enabled threats.

Key Points
  • OpenAI flagged a ChatGPT user in June 2025, but leadership overruled employee recommendations to report the account to police; the subsequent shooting caused eight deaths and 27 injuries.
  • Sam Altman publicly apologized to Tumbler Ridge, BC, for the failure to alert law enforcement.
  • OpenAI lowered its reporting threshold and contacted the RCMP, but changes are voluntary as Canada lacks laws requiring AI companies to report threats.

Why It Matters

This tragedy underscores the urgent need for mandatory AI threat-reporting laws: without them, whether an identified threat ever reaches police depends entirely on voluntary corporate judgment.