AI Safety

Empowering Affected Individuals to Shape AI Fairness Assessments: Processes, Criteria, and Tools

A new study shows how to let ordinary people, not just experts, define fairness for AI systems.

Deep Dive

A new study argues that fairness in AI systems, such as those used for credit scoring, should be defined by the people they impact, not only by technical experts. Working with 18 participants, the researchers had people create their own concrete fairness rules using a prototype tool. The study surfaced a diverse range of personal fairness criteria, showing that affected individuals can effectively shape how AI fairness is assessed and improved.
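To make the idea concrete: a participant-defined fairness rule can often be expressed as a simple, checkable condition over a model's decisions. The sketch below is purely illustrative and not taken from the study's prototype; the rule (a cap on the approval-rate gap between groups, i.e. a demographic-parity-style criterion), the threshold, and the data are all hypothetical.

```python
# Illustrative sketch (not the study's tool): one way an individually
# defined fairness rule for a credit-scoring model could be encoded.

def approval_rate(decisions):
    """Fraction of approved applications (decisions are 0/1)."""
    return sum(decisions) / len(decisions)

def parity_rule(decisions_by_group, max_gap=0.1):
    """A participant-style rule: approval rates across groups
    may differ by at most `max_gap` (a demographic-parity gap)."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates) <= max_gap

# Hypothetical model decisions for two applicant groups.
decisions = {
    "group_a": [1, 1, 0, 1, 0],  # 60% approved
    "group_b": [1, 0, 0, 1, 0],  # 40% approved
}
print(parity_rule(decisions))                 # False: gap 0.2 > 0.1
print(parity_rule(decisions, max_gap=0.25))   # True: gap 0.2 <= 0.25
```

Different participants might pick different rules entirely (equal error rates, case-by-case exceptions, and so on); the point is that such criteria can be stated precisely enough to audit a system against.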

Why It Matters

This approach makes AI systems more accountable and fairer by directly incorporating the values of the people they affect into how those systems are evaluated.