Human, Algorithm, or Both? Gender Bias in Human-Augmented Recruiting
New research finds AI-only hiring tools are less fair than humans, but a combined approach yields the best results.
A new study titled "Human, Algorithm, or Both? Gender Bias in Human-Augmented Recruiting" provides one of the first empirical comparisons of fairness across different hiring processes. Conducted by researchers Mesut Kaya and Toine Bogers, the work analyzes data from a real-world recruitment platform to measure gender bias in candidate selection. The research directly compares three distinct scenarios: recruiters manually searching a CV database, an AI-driven system matching candidates to jobs, and a combined human-AI approach.
The findings challenge the assumption that AI automatically introduces more bias. While the AI-only solution did produce less fair candidate lists than human recruiters working alone, the most equitable outcomes emerged from a hybrid model. In this optimal process, recruiters first interacted with an AI-generated slate of recommended candidates before manually searching for additional prospects. This sequence had a "beneficial effect" on the gender fairness of candidates who were ultimately viewed, clicked, and contacted. The study concludes that deliberate human review of algorithmic suggestions improves fairness and that human oversight is essential for developing more equitable, AI-augmented hiring practices.
- Human recruiters alone produced fairer candidate lists in terms of gender than an AI-only matching system.
- The hybrid human-AI process, where humans review AI suggestions first, yielded the fairest outcomes of all three methods tested.
- The research offers rare empirical evidence from a real platform, highlighting the need for human oversight to mitigate bias in algorithmic hiring.
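The paper's exact fairness metric isn't described here, but the core idea of comparing candidate lists for gender balance can be sketched with a simple representation ratio: each group's share of a selected slate divided by its share of the underlying pool. This is a hypothetical illustration with toy data, not the study's actual measure:

```python
# Hypothetical sketch: a representation ratio near 1.0 for every group
# suggests the selected slate mirrors the pool; values far from 1.0
# indicate over- or under-representation. The study's metric may differ.
from collections import Counter

def representation_ratios(pool, selected):
    """Ratio of each group's share in `selected` to its share in `pool`."""
    pool_counts = Counter(pool)
    sel_counts = Counter(selected)
    return {
        g: (sel_counts[g] / len(selected)) / (pool_counts[g] / len(pool))
        for g in pool_counts
    }

# Toy data: a balanced 50/50 pool, but a selected slate that skews "M".
pool = ["M"] * 50 + ["F"] * 50
selected = ["M"] * 7 + ["F"] * 3

print(representation_ratios(pool, selected))
# → {'M': 1.4, 'F': 0.6}
```

Comparing such ratios across the three scenarios (human-only, AI-only, hybrid) is one straightforward way to operationalize the kind of fairness comparison the study reports.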
Why It Matters
Provides data-driven guidance for companies on structuring their hiring tech stacks so they actively reduce bias rather than amplify it.