Research & Papers

Regional Explanations: Bridging Local and Global Variable Importance

New research proves that the popular AI explanation tools LIME and SHAP can give misleading results, even under ideal conditions.

Deep Dive

A new paper accepted at NeurIPS 2025 delivers a significant critique of two cornerstone methods in AI explainability. Researchers Salim I. Amoukou and Nicolas J-B. Brunel provide a formal analysis showing that Local Shapley Values (often implemented as SHAP) and LIME (Local Interpretable Model-agnostic Explanations) suffer from fundamental limitations. Even with exact computations and independent features, these popular tools can incorrectly assign importance to features that have no actual influence on a model's specific prediction, violating a core principle of sound attribution.
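The flavor of this failure can be seen in a small, self-contained toy illustration (ours, not the paper's): a model f(x1, x2) that returns x1 when x2 > 0 and 0 otherwise, explained at an instance with x2 < 0, where changing x1 alone can never move the prediction. Exact interventional (marginal) Shapley values, estimated against an independent background sample, still credit x1 with a sizable attribution:

```python
import numpy as np

# Hypothetical toy model (not from the paper): f(x1, x2) = x1 if x2 > 0, else 0.
def f(x1, x2):
    return np.where(x2 > 0, x1, 0.0)

rng = np.random.default_rng(0)
n = 200_000
bg1 = rng.normal(size=n)   # background X1 ~ N(0, 1)
bg2 = rng.normal(size=n)   # background X2 ~ N(0, 1), independent of X1

x1, x2 = 2.0, -1.0         # instance to explain; f(x1, x2) = 0 here

# Interventional (marginal) value function v(S) = E[f(x_S, X_notS)],
# estimated by Monte Carlo over the independent background sample.
v_empty = f(bg1, bg2).mean()                  # ~ 0
v_1     = f(np.full(n, x1), bg2).mean()       # ~ x1 * P(X2 > 0) = 1
v_2     = f(bg1, np.full(n, x2)).mean()       # = 0 (x2 <= 0 switches the model off)
v_12    = float(f(x1, x2))                    # = 0

# Exact two-feature Shapley formula.
phi_1 = 0.5 * ((v_1 - v_empty) + (v_12 - v_2))
phi_2 = 0.5 * ((v_2 - v_empty) + (v_12 - v_1))
print(f"phi_1 = {phi_1:.3f}, phi_2 = {phi_2:.3f}")   # phi_1 ~ 0.5, phi_2 ~ -0.5
# phi_1 is far from zero even though no change to x1 alone can move f(2, -1) off 0.
```

This is only one concrete flavor of the problem; the paper's formal analysis characterizes when and why local attributions of this kind credit features that do not matter for the specific prediction.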

To solve this problem, the authors introduce R-LOCO (Regional Leave Out COvariates). This new method bridges the gap between local and global explanation techniques. Instead of analyzing a single prediction in isolation, R-LOCO first segments the entire input space into regions where features have similar importance characteristics. It then applies robust global attribution methods within each region. An individual instance's feature contributions are derived from its membership in a specific region, combining the stability of global methods with the instance-specific detail required for local explanations.
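As a rough sketch of the regional idea (not the authors' algorithm; their region construction and attribution method are more principled), one could cluster inputs into regions and compute a leave-out-covariate-style importance within each region, here approximated by permuting a feature inside the region rather than refitting the model:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
# Synthetic data: x2 gates x1; x3 and x4 are pure noise.
y = X[:, 0] * (X[:, 1] > 0) + 0.1 * rng.normal(size=2000)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# 1) Partition the input space into regions (plain k-means here).
regions = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# 2) Per region, compute a leave-out-covariate-style importance for each feature:
#    how much does the region's error grow when that feature is knocked out?
#    (Permuting within the region is a cheap stand-in for refitting without it.)
importance = {}
for r in np.unique(regions):
    Xr, yr = X[regions == r], y[regions == r]
    base = mean_squared_error(yr, model.predict(Xr))
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        Xp = Xr.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores[j] = mean_squared_error(yr, model.predict(Xp)) - base
    importance[r] = scores

# 3) An individual instance inherits the importance vector of its region.
i = 0
print("region", regions[i], "->", np.round(importance[regions[i]], 3))
```

Because each importance vector is estimated from many points in a region rather than from a single local fit, the attributions are far less noisy, which is the stability the regional framing is designed to buy.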

This approach directly addresses the identified instability and inaccuracy of purely local methods. By moving to a regional framework, R-LOCO avoids the pitfalls of analyzing noisy, single-point estimates and provides more faithful attributions. The work challenges the AI community to re-evaluate reliance on current local explanation tools and offers a principled, hybrid path forward for understanding complex model behavior.

Key Points
  • Proves that Local SHAP and LIME can assign importance to irrelevant features, even with exact computation and independent data.
  • Introduces R-LOCO, a method that groups similar inputs into regions before calculating stable, global-style feature importance.
  • Accepted at the top-tier NeurIPS 2025 conference, signaling major impact for ML practitioners and regulators relying on model explanations.

Why It Matters

Trust in AI decisions hinges on accurate explanations; flawed tools like SHAP and LIME undermine model auditing, compliance, and debugging.