Algorithmic Feature Highlighting for Human-AI Decision-Making
AI highlights the most important data points for better human decisions.
A new paper by Yifan Guo and Jann Spiess, posted on arXiv, introduces an approach to human-AI decision-making called "algorithmic feature highlighting." Instead of providing a single prediction or recommendation, the algorithm selects a small subset of case-specific features to present to the human decision-maker. This tackles information overload in complex settings where many features are potentially relevant but human bandwidth is limited. The authors model highlighting as a constrained information policy and study how two types of users interpret the selection: sophisticated agents, who correctly condition on the selection rule, and naive agents, who treat the selection as exogenous.
The study reveals significant computational challenges: optimizing highlighting for sophisticated agents is computationally intractable even in simple settings, while optimization for naive agents is tractable when bandwidth is fixed. Crucially, a policy optimal for sophisticated agents can perform arbitrarily poorly when deployed with naive users, underscoring the need for robust, implementable alternatives. The framework is illustrated with a calibrated empirical exercise based on the American Housing Survey, demonstrating its practical appeal. Overall, the research establishes the value of context-specific feature highlighting as a computationally feasible tool for achieving human-algorithm complementarity, offering a promising path for real-world applications in fields like healthcare, finance, and public policy.
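To make the mechanism concrete, here is a minimal toy sketch of bandwidth-constrained highlighting for a naive agent: given a linear predictive model, it surfaces the k features whose case-specific deviations from the population mean contribute most to the prediction. The function name, the linear model, and the contribution-ranking rule are illustrative assumptions for this sketch, not the authors' actual optimization procedure.

```python
def highlight_features(x, weights, mean_x, k=3):
    """Toy bandwidth-constrained highlighting (illustrative, not the
    paper's algorithm): rank features by the absolute contribution of
    their deviation from the population mean to a linear prediction,
    and return the indices of the top k."""
    contributions = [w * (xi - mi) for w, xi, mi in zip(weights, x, mean_x)]
    ranked = sorted(range(len(x)), key=lambda i: -abs(contributions[i]))
    return ranked[:k]  # indices of the k highlighted features

# Made-up housing-style case: 5 features, bandwidth k = 2.
weights = [0.5, -0.2, 0.1, 0.8, -0.05]
mean_x  = [3.0, 20.0, 1.0, 1500.0, 5.0]
x       = [4.0, 35.0, 1.0, 1500.0, 9.0]
print(highlight_features(x, weights, mean_x, k=2))  # → [1, 0]
```

The fixed bandwidth k is what keeps this tractable for a naive agent: the policy just ranks and truncates, with no need to reason about how the user will second-guess the selection rule itself.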
- Algorithms highlight a small set of case-specific features to reduce information overload for human decision-makers.
- Optimizing for sophisticated agents is computationally intractable, while optimizing for naive agents is tractable with fixed bandwidth.
- Validated with American Housing Survey data, showing practical value for human-AI complementarity.
Why It Matters
By making AI assistance more interpretable and easier to act on, this approach could meaningfully improve decision support in high-stakes fields.