Strategic Costs of Perceived Bias in Fair Selection
A new game-theoretic model reveals how AI-powered career tools can unintentionally widen socioeconomic gaps.
A team of researchers including L. Elisa Celis and Nisheeth K. Vishnoi has published a groundbreaking paper, 'Strategic Costs of Perceived Bias in Fair Selection,' accepted at NeurIPS 2025. Using a game-theoretic model, they analyze meritocratic systems like college admissions and corporate hiring. The core finding is a 'perception-driven bias': when candidates from different socioeconomic groups perceive different post-selection values (e.g., future salary or career success), they rationally invest different levels of effort. This effort directly translates to observable merit, meaning disparities emerge even when selection is perfectly fair and based solely on that merit.
The paper specifically highlights the role of modern 'techno-social environments,' including AI-powered tools that provide personalized career or salary guidance. These tools can inadvertently shape and solidify group-based perceptions of value. The researchers characterize the unique Nash equilibrium in their model and provide explicit formulas showing how valuation disparities and institutional selectivity jointly determine outcomes like effort, representation, and social welfare. They also propose a cost-sensitive optimization framework, offering institutions a way to quantify how modifying selectivity or working to change perceived values can reduce disparities without sacrificing their core goals. This work bridges rational-choice and structural explanations of inequality, showing how individual incentives are shaped by a broader feedback cycle linking technology, social context, and perceived rewards.
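The mechanism can be illustrated with a deliberately simple sketch. This is not the paper's actual model: here we assume, hypothetically, that a candidate's selection probability is proportional to effort and that effort carries a quadratic cost, so the best-response effort is proportional to the candidate's perceived post-selection value.

```python
# Toy sketch of perception-driven bias (assumed functional forms, not
# the authors' formulas): a candidate picks effort e to maximize
#   perceived_value * e - cost_coeff * e**2 / 2,
# whose maximizer is e* = perceived_value / cost_coeff.

def optimal_effort(perceived_value: float, cost_coeff: float = 1.0) -> float:
    """Best-response effort under the assumed linear-benefit, quadratic-cost payoff."""
    return perceived_value / cost_coeff

# Two groups that differ ONLY in perceived post-selection value
# (hypothetical numbers chosen for illustration).
v_a, v_b = 1.0, 0.7
e_a, e_b = optimal_effort(v_a), optimal_effort(v_b)

# Merit equals observable effort, so a perfectly fair merit-based
# selector still sees a gap that originated in perceptions alone.
print(e_a, e_b)  # group B rationally under-invests
```

Even with identical ability and a selector that looks only at merit, the group with the lower perceived value of being selected rationally exerts less effort, and the disparity appears downstream.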
- The model identifies 'perception-driven bias,' where differing views of future value lead to rational effort gaps, propagating inequality.
- It explicitly factors in modern AI tools (e.g., personalized career guidance platforms) as shapers of these perceptions.
- The authors provide a quantitative framework to help institutions reduce disparities without compromising on merit-based selection goals.
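The cost-sensitive idea in the third point can be sketched as a small optimization. All functional forms below are assumptions made for illustration, not the paper's formulas: we assume the equilibrium merit gap equals the gap in perceived values, and that raising a group's perceived value (e.g., via outreach or guidance tools) has a convex cost.

```python
# Hypothetical cost-sensitive trade-off: an institution weighs the cost
# of shifting one group's perceived value against the resulting drop in
# the merit (effort) gap. Functional forms are assumptions.

def merit_gap(v_a: float, v_b: float) -> float:
    # Assumed: equilibrium effort scales with perceived value, so the
    # merit gap is just the valuation gap.
    return abs(v_a - v_b)

def intervention_cost(delta: float, kappa: float = 2.0) -> float:
    # Assumed convex price for raising group B's perceived value by delta.
    return kappa * delta ** 2

def objective(delta: float, v_a: float = 1.0, v_b: float = 0.7,
              lam: float = 1.0) -> float:
    # Residual disparity plus lam-weighted intervention cost.
    return merit_gap(v_a, v_b + delta) + lam * intervention_cost(delta)

# Grid search over intervention sizes in [0, 0.30].
best_value, best_delta = min((objective(d / 100), d / 100) for d in range(31))
print(best_delta, best_value)
```

The point of the exercise is the shape of the answer: partially closing the perception gap is optimal here, because the last increments of disparity reduction cost more than they save. The paper's framework formalizes this kind of trade-off with its own equilibrium-derived quantities.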
Why It Matters
This research provides a crucial mathematical framework for understanding and mitigating bias in AI-driven hiring, admissions, and lending systems used by enterprises worldwide.