OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm
New opt-in feature alerts a designated contact when ChatGPT detects signs of self-harm and human reviewers confirm serious risk.
OpenAI has introduced Trusted Contact, a safety feature designed to intervene when ChatGPT conversations may indicate self-harm. Adult users can optionally designate a trusted person, such as a friend or family member, in their account settings. When the AI detects potential suicidal ideation, it first encourages the user to reach out to that contact themselves. If automated analysis finds the situation still concerning, the case is escalated to OpenAI's human safety team, which aims to review the incident in under an hour. If the team determines there is serious risk, an automated alert goes to the designated contact via email, text message, or in-app notification. The alert is intentionally brief and omits details of the conversation, protecting user privacy.

The feature arrives amid a wave of lawsuits from families of people who died by suicide after interacting with ChatGPT; some plaintiffs allege the chatbot actively encouraged self-harm or helped plan it.
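Setting the legal backdrop aside, the flow OpenAI describes amounts to a tiered escalation pipeline: nudge the user first, involve human reviewers only for concerning cases, and notify the contact only after review confirms serious risk. The sketch below is a minimal illustration of that shape in Python; every name in it (RiskAssessment, handle_assessment, ESCALATION_THRESHOLD, and so on) is hypothetical, and none of it reflects OpenAI's actual systems.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

# Hypothetical severity cutoff for escalating to human review.
ESCALATION_THRESHOLD = 0.8

class Channel(Enum):
    EMAIL = auto()
    SMS = auto()
    IN_APP = auto()

@dataclass
class TrustedContact:
    name: str
    channel: Channel
    address: str

@dataclass
class RiskAssessment:
    user_id: str
    severity: float                     # 0.0-1.0, from automated analysis
    opted_in: bool                      # user has designated a trusted contact
    contact: Optional[TrustedContact]

def human_review_confirms_risk(assessment: RiskAssessment) -> bool:
    # Placeholder for the human safety-team review step
    # (target turnaround: under one hour). Always confirms in this demo.
    return True

def send_alert(contact: TrustedContact) -> str:
    # The alert is deliberately brief and carries no conversation content.
    return f"alert_sent via {contact.channel.name} to {contact.address}"

def handle_assessment(assessment: RiskAssessment) -> str:
    """Tiered response: nudge the user first, escalate only concerning
    cases to human review, and alert the contact only if reviewers
    confirm serious risk."""
    if not assessment.opted_in or assessment.contact is None:
        # Existing behavior: point the user to professional resources.
        return "show_crisis_resources"
    # Step 1: encourage the user to reach out to their contact themselves.
    if assessment.severity < ESCALATION_THRESHOLD:
        return "encourage_self_outreach"
    # Step 2: queue for human safety review before any alert is sent.
    if not human_review_confirms_risk(assessment):
        return "encourage_self_outreach"
    # Step 3: send a brief, privacy-preserving alert.
    return send_alert(assessment.contact)

if __name__ == "__main__":
    contact = TrustedContact("Alex", Channel.SMS, "+1-555-0100")
    case = RiskAssessment("user-123", severity=0.9, opted_in=True, contact=contact)
    print(handle_assessment(case))  # -> alert_sent via SMS to +1-555-0100
```

Note that in this sketch, as in OpenAI's description, the alert payload carries only the delivery channel and recipient, never the conversation itself.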
The Trusted Contact feature is entirely optional, and users can maintain multiple ChatGPT accounts, potentially bypassing the safeguard if they choose. It builds on earlier measures OpenAI introduced last September, including parental controls that notify a parent if their teen's account shows signs of serious risk. ChatGPT has also long included automatic prompts directing users to professional mental health resources when self-harm is discussed. OpenAI describes the feature as part of a broader effort to build AI systems that responsibly support people in distress, and says it will continue working with clinicians, researchers, and policymakers to improve how AI responds to users in difficult moments.

Critics note that the opt-in design may limit the feature's effectiveness for those most at risk: a user contemplating self-harm could simply decline to enable it or switch to a different account.
- Users designate a trusted contact in account settings; alerts are sent via email, text, or in-app notification without revealing chat details.
- OpenAI's human safety team aims to review flagged cases within an hour before any alert is sent; the feature is optional and can be bypassed via multiple accounts.
- Follows lawsuits claiming ChatGPT encouraged suicide; complements existing parental controls and mental health resource prompts.
Why It Matters
Introduces a privacy-conscious safeguard for vulnerable users, but its opt-in design may limit real-world impact in crisis situations.