Research & Papers

Young people's perceptions and recommendations for conversational generative artificial intelligence in youth mental health

A new study with 32 young people outlines critical requirements for safe, effective mental health AI agents.

Deep Dive

A research team from Australian institutions, including Adam Poulsen and Ian B. Hickie, has published a pivotal study on young people's perspectives on integrating conversational generative AI into youth mental health care. The work centered on the Mental health Intelligence Agent (Mia), a chatbot originally designed for professionals. Through co-design workshops with 32 young people, the study moved beyond theoretical benefits to gather concrete, youth-driven requirements for reconceptualizing such tools for direct consumer use and service integration.

The analysis crystallized into four essential themes to guide development:
  • 'Humanising AI without dehumanising care': ensuring technology complements rather than replaces human connection.
  • 'I need to know what's under the hood': demanding transparency about how the AI works.
  • 'Right tool, right place, right time?': questioning appropriate use cases.
  • 'Making it mine on safe ground': emphasizing personalization within rigorously safe parameters.
These findings directly inform the ethics, design, and governance of future mental health AI agents.

This research is significant because it shifts the conversation from what AI *can* do to what young users *need* it to do. By prioritizing co-design, the study provides a practical framework for developers and health services alike. It argues that for genAI chatbots like Mia to be trusted and effective, they must be built with these youth-validated principles at their core, balancing innovative support with critical safeguards.

Key Points
  • Study involved 32 young people in co-design workshops for the 'Mia' mental health AI chatbot.
  • Identified four critical design themes including transparency ('under the hood') and safe personalization.
  • Provides a direct blueprint for the ethical development and service integration of consumer-facing mental health AI.

Why It Matters

Offers a user-centered framework to build effective, trusted AI mental health tools that young people will actually use.