ChatGPT’s ‘Trusted Contact’ will alert loved ones to safety concerns
Opt-in feature notifies your chosen contact if AI detects self-harm signals.
OpenAI is introducing ‘Trusted Contact,’ a new ChatGPT safety feature for adult users that expands on existing teen-focused emergency controls. Users can designate a friend, family member, or caregiver to be notified if the AI’s automated systems detect conversations about self-harm or suicide. The feature is entirely opt-in: users add a contact in their account settings, and that person must accept the invitation within a week. Either party can opt out at any time.
If OpenAI’s models flag a potential crisis, ChatGPT first encourages the user to reach out to their Trusted Contact. A small team of specially trained human reviewers then assesses the conversation, and only if serious risk is confirmed does the system send a brief alert via email, text, or in-app notification—never sharing chat details or transcripts. The feature follows a 2025 tragedy in which a 16-year-old user took his own life after confiding in ChatGPT, an incident that prompted OpenAI to introduce similar parental controls for teens. Meta has implemented analogous safeguards for Instagram searches related to self-harm.
- Trusted Contact is opt-in for all adult ChatGPT users (18+ globally, 19+ in South Korea).
- Notification includes only a brief alert—no chat transcripts or details are shared.
- The decision to notify is made by a small team of specially trained human reviewers after automated systems detect self-harm signals.
Why It Matters
Gives users a safety net while preserving privacy, addressing real-world risks of AI companionship.