Media & Culture

What 81,000 people want from AI | Anthropic

Massive user study reveals what people prioritize most: AI that is helpful, honest, and harmless.

Deep Dive

Anthropic, the AI safety company behind Claude, surveyed 81,000 people to ask directly what they value most in AI systems. The clear winner was a trio of principles: helpful, honest, and harmless (HHH). Among these, being 'helpful' (providing accurate, useful, and actionable information) ranked as the single most important trait. This large-scale, direct feedback is a cornerstone of Anthropic's 'Constitutional AI' training methodology, which uses a written set of principles to guide model behavior.
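The core idea behind Constitutional AI can be pictured as a critique-and-revise loop: a draft response is checked against a set of principles, and drafts that violate a principle are flagged for revision. The sketch below is a toy illustration only; the principle names echo HHH, but the keyword checks and revision step are invented stand-ins, not Anthropic's actual constitution or training pipeline (which uses the model itself to critique and rewrite its outputs).

```python
# Toy sketch of a Constitutional-AI-style critique/revise loop.
# The principles and checks below are illustrative stand-ins, not
# Anthropic's actual constitution or pipeline.

PRINCIPLES = [
    # (principle name, naive check on the draft text)
    ("helpful", lambda text: len(text.strip()) > 0),            # must say something
    ("honest", lambda text: "guaranteed" not in text.lower()),  # avoid overclaiming
    ("harmless", lambda text: "exploit" not in text.lower()),   # avoid harmful content
]

def critique(response: str) -> list[str]:
    """Return the names of principles the draft response violates."""
    return [name for name, check in PRINCIPLES if not check(response)]

def constitutional_step(response: str) -> str:
    """One critique/revise pass. In real Constitutional AI the model
    rewrites its own draft; here we merely annotate the violations."""
    violations = critique(response)
    if not violations:
        return response
    return f"[needs revision for: {', '.join(violations)}] {response}"

print(constitutional_step("Success is guaranteed with this method."))
```

The key design point the sketch preserves is that the principles are data, not code paths: changing model behavior means editing the list of principles, which is what makes large-scale user feedback like this survey directly actionable.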

The survey results provide a crucial, data-driven counterpoint to purely capability-focused benchmarks. Raw performance on tasks like coding or math matters, but users fundamentally want AI they can trust to be beneficial and not cause harm. These preferences directly shape how Anthropic develops and refines models like Claude 3.5 Sonnet, aligning them with human values from the ground up. The findings suggest that future AI development must balance advanced functionality with these core safety and reliability tenets.

Key Points
  • 81,000 users ranked 'helpful, honest, harmless' (HHH) as top AI priorities.
  • The principle of being 'helpful' was valued above 'honest' and 'harmless'.
  • Data directly feeds into Anthropic's Constitutional AI approach for model training.

Why It Matters

This user-driven data shapes how safe, enterprise-grade AI like Claude is built, prioritizing trust and utility.