AI Safety

AI and Suicide Prevention: A Cross-Sector Primer

Millions rely on chatbots for crisis support—yet no clinical standards exist.

Deep Dive

A new cross-sector primer from the Partnership on AI, developed alongside a multistakeholder workshop in 2026, tackles an urgent gap: AI chatbots are widely used as mental health support tools, yet the field lacks clinical validation, shared standards, and coordinated oversight. The paper, authored by Emily Saltz and Claire R. Leibowicz, begins with an overview of clinical best practices in suicide prevention, then examines how frontier AI systems (as of winter 2026) currently detect and respond to queries involving suicide and non-suicidal self-injury (NSSI). The workshop brought together AI labs, mental health practitioners, individuals with lived experience, and policymakers to establish a common reference point for the field.

The primer maps challenges across three layers: model, product, and policy. It draws on clinical literature, publicly available AI lab policies, and emerging evaluation frameworks to pinpoint where general-purpose chatbots fail in crisis contexts. Key issues include inconsistent responses to explicit self-harm language, a lack of transparent safety guardrails, and the absence of standardized crisis escalation protocols. The authors ultimately highlight urgent and achievable areas for cross-industry alignment, such as developing shared benchmarks for crisis detection and integrating verified mental health resources directly into chatbot responses, to better prevent suicide and promote overall well-being.

Key Points
  • AI chatbots already act as de facto mental health support for millions, yet lack clinical validation or shared standards.
  • The primer maps how frontier AI systems (as of winter 2026) detect and respond to suicide and NSSI queries across model, product, and policy layers.
  • A multistakeholder workshop convened AI labs, mental health practitioners, lived-experience advocates, and policymakers to identify actionable alignment priorities.

Why It Matters

As AI chatbots become frontline crisis tools, this primer offers a blueprint for safe, standardized mental health AI.