CoMAI: A Collaborative Multi-Agent Framework for Robust and Equitable Interview Evaluation
New modular framework uses four specialized AI agents to reduce bias and improve security in hiring evaluations.
Gengxin Sun, Ruihao Yu, and colleagues have introduced CoMAI, a multi-agent framework designed to address the persistent challenges of bias and security in AI-powered interview systems. Unlike traditional single-agent LLM approaches, CoMAI employs a modular architecture in which four specialized AI agents (handling question generation, security, scoring, and summarization) work in concert under a centralized finite-state machine. This design enables multi-layered defenses against prompt injection attacks and supports adaptive difficulty adjustment during assessments.
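To make the architecture concrete, here is a minimal sketch of how a centralized finite-state machine might route one interview turn through the four agents. The state names, transitions, and scoring logic below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
from enum import Enum, auto

class Stage(Enum):
    QUESTION = auto()   # question-generation agent
    SECURITY = auto()   # screens input for prompt-injection attempts
    SCORING = auto()    # rubric-based scoring agent
    SUMMARY = auto()    # final summarization agent
    DONE = auto()

def question_agent(state):
    # Hypothetical: generate the next question for the current turn.
    state["question"] = f"Q{state['turn'] + 1}: describe a project you led."
    return Stage.SECURITY

def security_agent(state):
    # Illustrative single-layer check; a real system would layer defenses.
    answer = state["answers"][state["turn"]]
    state["flagged"] = "ignore previous instructions" in answer.lower()
    return Stage.SCORING

def scoring_agent(state):
    # Toy rubric: flagged answers score 0; otherwise score by answer length.
    answer = state["answers"][state["turn"]]
    score = 0 if state["flagged"] else min(5, len(answer.split()) // 5)
    state["scores"].append(score)
    state["turn"] += 1
    return Stage.SUMMARY if state["turn"] >= len(state["answers"]) else Stage.QUESTION

def summary_agent(state):
    state["summary"] = f"avg score {sum(state['scores']) / len(state['scores']):.1f}"
    return Stage.DONE

AGENTS = {Stage.QUESTION: question_agent, Stage.SECURITY: security_agent,
          Stage.SCORING: scoring_agent, Stage.SUMMARY: summary_agent}

def run_interview(answers):
    # The FSM is the sole coordinator: each agent returns the next stage.
    state = {"turn": 0, "answers": answers, "scores": [], "flagged": False}
    stage = Stage.QUESTION
    while stage is not Stage.DONE:
        stage = AGENTS[stage](state)
    return state
```

The key design point the sketch illustrates is that no agent calls another directly; every handoff passes through the state machine, which is what allows a security agent to sit between candidate input and the scorer on every turn.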
Experimental results from the paper, published on arXiv, demonstrate CoMAI's effectiveness. The framework achieved 90.47% accuracy and 83.33% recall in evaluation tasks, while also garnering an 84.41% candidate satisfaction rate. These metrics highlight its potential as a more robust and interpretable alternative to monolithic systems. The rubric-based structured scoring mechanism is a key innovation, explicitly designed to reduce subjective bias and promote fairness in automated hiring evaluations.
- Uses four specialized AI agents (question, security, scoring, summary) coordinated via a finite-state machine for modular assessment
- Achieved 90.47% accuracy and 84.41% candidate satisfaction in experiments, outperforming single-agent LLM systems
- Implements multi-layered security against prompt injection and rubric-based scoring to reduce subjective bias in hiring
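The rubric-based scoring idea from the last point can be sketched as a weighted checklist, where every candidate is rated against the same fixed criteria rather than an evaluator's free-form impression. The criteria and weights below are hypothetical, chosen only to show the mechanism:

```python
# Hypothetical rubric: criterion -> weight (weights sum to 1.0).
RUBRIC = {"technical_depth": 0.4, "communication": 0.3, "problem_solving": 0.3}

def rubric_score(ratings):
    """Weighted average of per-criterion ratings (0-5), scaled to 0-100."""
    missing = set(RUBRIC) - set(ratings)
    if missing:
        # Forcing every criterion to be rated is part of the bias control:
        # no dimension can be silently skipped for one candidate.
        raise ValueError(f"unrated criteria: {sorted(missing)}")
    return round(sum(RUBRIC[c] * ratings[c] for c in RUBRIC) / 5 * 100, 1)
```

For example, ratings of 4, 5, and 3 on the three criteria yield `rubric_score({"technical_depth": 4, "communication": 5, "problem_solving": 3})` = 80.0. Because the weights and criteria are fixed per role, two evaluations of the same answers are directly comparable, which is the interpretability property the paper attributes to structured scoring.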
Why It Matters
Provides a more secure, transparent, and equitable framework for automated hiring, potentially reducing human bias in critical employment decisions.