XMENTOR: A Rank-Aware Aggregation Approach for Human-Centered Explainable AI in Just-in-Time Software Defect Prediction
New rank-aware method aggregates LIME and SHAP outputs to cut confusion in AI-powered defect prediction.
A research team led by Saumendu Roy has published XMENTOR, a novel rank-aware aggregation framework designed to solve a critical usability problem in AI-assisted software engineering. While ML models for defect prediction can improve code quality, developers often struggle to trust them because their reasoning is opaque. Existing post-hoc explainable AI (XAI) methods such as LIME and SHAP frequently provide conflicting explanations for the same prediction, which adds confusion and cognitive load. XMENTOR addresses this by unifying multiple explanation outputs into a single, coherent view, aiming to enhance interpretability and trust directly within the developer's workflow.
Technically, XMENTOR is implemented as a Visual Studio Code plugin that applies adaptive thresholding, rank and sign agreement checks, and fallback strategies to merge explanations from different XAI tools. This human-centered design prioritizes clarity without overwhelming the user. The key validation comes from a user study in which nearly 90% of participating developers preferred the aggregated explanations over the individual, often contradictory, ones. The findings demonstrate that intelligently combining XAI outputs and embedding them in tools like VS Code can significantly boost the practical usability of AI for critical daily tasks such as debugging and code review, moving beyond raw accuracy to foster genuine developer trust.
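To make the aggregation idea concrete, the TypeScript sketch below shows one way rank agreement, sign agreement, a fallback, and an adaptive threshold could be combined. The rank-gap limit, the 0.2 cutoff fraction, the SHAP-only fallback, and the feature names are all illustrative assumptions; it is a minimal sketch in the spirit of the paper, not XMENTOR's published algorithm.

```typescript
// Hypothetical sketch of rank-aware aggregation of two feature-attribution
// maps. The agreement rules and thresholds are assumptions, not the
// paper's actual parameters.

type Scores = Record<string, number>;

// Rank features by absolute importance (rank 0 = most important).
function rankByMagnitude(scores: Scores): Map<string, number> {
  const ordered = Object.keys(scores).sort(
    (a, b) => Math.abs(scores[b]) - Math.abs(scores[a]),
  );
  return new Map(ordered.map((feature, rank) => [feature, rank]));
}

function aggregateExplanations(
  lime: Scores,
  shap: Scores,
  maxRankGap = 2, // assumed tolerance for rank disagreement
): Scores {
  const limeRank = rankByMagnitude(lime);
  const shapRank = rankByMagnitude(shap);
  const common = Object.keys(lime).filter((f) => f in shap);

  // Keep a feature only when both explainers agree on its sign and
  // roughly on its rank; average the two scores as the consensus value.
  const merged: Scores = {};
  for (const f of common) {
    const signAgree = lime[f] * shap[f] > 0;
    const rankAgree =
      Math.abs((limeRank.get(f) ?? 0) - (shapRank.get(f) ?? 0)) <= maxRankGap;
    if (signAgree && rankAgree) {
      merged[f] = (lime[f] + shap[f]) / 2;
    }
  }

  // Fallback: if agreement is too sparse, show one explainer's view
  // rather than an empty or misleading merged list.
  if (Object.keys(merged).length < common.length / 2) {
    return { ...shap };
  }

  // Adaptive threshold: drop features far below the strongest signal,
  // so the editor shows a short, high-signal list.
  const top = Math.max(...Object.values(merged).map(Math.abs));
  return Object.fromEntries(
    Object.entries(merged).filter(([, s]) => Math.abs(s) >= 0.2 * top),
  );
}

// Example with hypothetical commit-level features:
const limeScores = { linesAdded: 0.42, churn: 0.31, numDevs: -0.05 };
const shapScores = { linesAdded: 0.38, churn: 0.22, numDevs: 0.11 };
console.log(aggregateExplanations(limeScores, shapScores));
// -> { linesAdded: 0.4, churn: 0.265 }  (numDevs dropped: signs disagree)
```

Averaging only where both explainers agree on direction and approximate importance is what keeps the merged view free of the contradictions that individual explanations produce.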
- Unifies conflicting outputs from XAI tools like LIME and SHAP into a single view using rank-aware aggregation.
- Implemented as a VS Code plugin, embedding explanations directly into the developer's workflow for just-in-time defect prediction (a sketch of this integration follows the list).
- A user study found that nearly 90% of developers preferred the aggregated explanations, citing reduced confusion and better debugging support.
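As a rough illustration of the editor embedding, here is a minimal sketch that surfaces a merged explanation through the standard VS Code diagnostics API. The on-save trigger, message format, and hard-coded scores are assumptions made to keep the example self-contained; they are not XMENTOR's actual behavior.

```typescript
// Hypothetical sketch of presenting a merged explanation inside VS Code.
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext) {
  const diagnostics =
    vscode.languages.createDiagnosticCollection("defect-explanation");

  const listener = vscode.workspace.onDidSaveTextDocument((doc) => {
    // In a real plugin, a JIT defect model plus LIME/SHAP would run here;
    // we hard-code a merged result to keep the sketch self-contained.
    const merged = { linesAdded: 0.4, churn: 0.265 };
    const message =
      "Predicted defect-prone. Top factors: " +
      Object.entries(merged)
        .map(([f, s]) => `${f} (${s.toFixed(2)})`)
        .join(", ");

    // Attach one informational diagnostic to the first line of the file.
    const range = new vscode.Range(0, 0, 0, 1);
    diagnostics.set(doc.uri, [
      new vscode.Diagnostic(
        range,
        message,
        vscode.DiagnosticSeverity.Information,
      ),
    ]);
  });

  context.subscriptions.push(diagnostics, listener);
}
```

Surfacing the consensus as a single diagnostic, rather than two competing explanation panels, is the kind of in-workflow presentation the study participants favored.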
Why It Matters
XMENTOR tackles developer trust, the core adoption barrier for AI in software engineering, by making model reasoning coherent and actionable.