VERA-MH Concept Paper
A new automated system uses AI agents to role-play patients and judge chatbot responses for suicide risk.
A team of clinicians and researchers led by Luca Belli introduced VERA-MH (Validation of Ethical and Responsible AI in Mental Health), an automated safety evaluation for mental health chatbots. It uses two AI agents: one simulates users with predefined risk levels, and another judges the chatbot's responses against a clinical rubric. The team has already conducted preliminary tests on GPT-5 and Claude models to refine the system and is seeking community feedback for further validation.
Why It Matters
As AI chatbots are increasingly used for mental health support, automated, clinically-informed safety testing is critical to prevent harm.