About 12% of U.S. teens turn to AI for emotional support or advice
New data reveals a significant gap between teen AI usage and parental awareness, raising safety concerns.
A new report from the Pew Research Center finds that AI chatbots have become a significant part of American teenagers' lives, with many using them in roles traditionally filled by friends or family. While most teens use AI for practical tasks such as searching for information (57%) and getting help with schoolwork (54%), a notable 12% report turning to general-purpose chatbots like OpenAI's ChatGPT, Anthropic's Claude, and xAI's Grok for emotional support or advice, and 16% use them for casual conversation. The trend is unfolding despite warnings from mental health professionals that these models are not designed for therapeutic use and can, in extreme cases, produce isolating or harmful psychological effects. Lawsuits filed after teen suicides linked to prolonged chatbot conversations on platforms such as Character.AI underscore those risks.
The data exposes a stark gap between teen behavior and parental awareness: 64% of teens say they use AI chatbots, but only 51% of parents believe their teen does. Parental approval also plummets for non-academic uses: just 28% are comfortable with casual conversation, and only 18% approve of emotional support, while 58% explicitly disapprove. The findings have prompted platform-level responses, with Character.AI disabling chatbot access for users under 18 and OpenAI sunsetting its notably empathetic GPT-4o voice mode after user backlash. The report underscores an urgent, unresolved tension in AI safety: powerful but largely unregulated tools are filling complex social and emotional voids for a vulnerable demographic, and teens themselves remain divided on AI's long-term societal impact.
- 12% of U.S. teens use general-purpose AI chatbots like ChatGPT for emotional support or advice, a purpose for which these tools are neither designed nor proven safe.
- A major awareness gap exists: 64% of teens report using chatbots, but only 51% of parents are aware of their teen's usage.
- Platforms are reacting: Character.AI disabled access for users under 18, and OpenAI retired its GPT-4o voice mode amid safety concerns.
Why It Matters
Unregulated AI is filling critical emotional roles for teens, creating urgent safety and ethical challenges for developers and parents.