AI Safety

ReasonX: Declarative Reasoning on Explanations

Researchers' new system uses MILP and constraint logic to make AI decisions interpretable.

Deep Dive

A team of researchers including Laura State, Salvatore Ruggieri, and Franco Turini has introduced ReasonX, a novel explanation tool designed to address critical shortcomings in current explainable AI (XAI) methods: insufficient abstraction, limited user interactivity, and poor integration of symbolic knowledge. ReasonX operates through expressions in a closed algebra of operators over theories of linear constraints, providing declarative and interactive explanations primarily for decision trees. These trees can either be the actual ML models under analysis or serve as global or local surrogate models for any black-box predictor, making the tool applicable across different AI architectures.
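
To make the surrogate setup concrete, here is a minimal sketch, assuming a scikit-learn workflow rather than the ReasonX API itself: a shallow decision tree is trained to mimic a black-box classifier's predictions, and its axis-aligned splits are precisely the kind of linear (threshold) constraints a tool like ReasonX can reason over. All model and variable names are illustrative.

```python
# Illustrative sketch of the surrogate-tree setup (not the ReasonX API).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# Stand-in for any opaque predictor.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Global surrogate: a shallow tree fit to the black box's own labels.
# Each root-to-leaf path is a conjunction of linear threshold constraints.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to the black box: {fidelity:.1%}")
```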

The technical architecture of ReasonX consists of two layers: a user-facing Python layer and a backend Constraint Logic Programming (CLP) layer that implements a meta-interpreter of the query algebra. Its core innovation is the use of Mixed-Integer Linear Programming (MILP) to reason over the features of both factual and contrastive instances, letting users express background or common-sense knowledge as linear constraints. This enables reasoning at multiple levels of abstraction, from fully specified examples to under-specified or partially constrained scenarios. The researchers demonstrate ReasonX's capabilities through qualitative examples and quantitative experiments comparing it to other XAI tools, showing how it moves beyond static explanations toward interactive, knowledge-augmented interpretability.
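
The contrastive queries described above can be pictured as a small optimization problem. The sketch below, written with the PuLP solver rather than ReasonX's CLP backend, finds the instance closest (in L1 distance) to a denied loan applicant that reaches one of two hypothetical "accept" leaf paths of a decision tree, while respecting a user-supplied background constraint that age cannot decrease. The feature names, thresholds, and leaf paths are all invented for illustration.

```python
# Hypothetical MILP formulation of a contrastive query (PuLP, not ReasonX).
import pulp

# Factual instance: a loan application the model denied.
income_0, age_0 = 40.0, 30.0

prob = pulp.LpProblem("closest_contrastive_instance", pulp.LpMinimize)
income = pulp.LpVariable("income", lowBound=0, upBound=200)
age = pulp.LpVariable("age", lowBound=18, upBound=100)

# Auxiliary variables encode |x - x0| so the objective is the L1 distance.
d_income = pulp.LpVariable("d_income", lowBound=0)
d_age = pulp.LpVariable("d_age", lowBound=0)
prob += d_income + d_age  # objective: smallest change that flips the outcome
prob += d_income >= income - income_0
prob += d_income >= income_0 - income
prob += d_age >= age - age_0
prob += d_age >= age_0 - age

# Two invented "accept" leaf paths of the (surrogate) decision tree.
# A binary choice variable plus big-M constraints activates exactly one path.
M = 1000
use_path_a = pulp.LpVariable("use_path_a", cat="Binary")
prob += income >= 50 - M * (1 - use_path_a)   # path A: income >= 50
prob += income >= 30 - M * use_path_a         # path B: income >= 30 ...
prob += age >= 45 - M * use_path_a            # ...and age >= 45

# User background knowledge as a linear constraint: age cannot decrease.
prob += age >= age_0

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print(f"Contrastive instance: income={income.value():.1f}, age={age.value():.1f}")
```

The binary variable choosing between the two leaf paths is what makes the program mixed-integer: the solver weighs alternative routes to the desired outcome in a single optimization instead of enumerating them by hand, which is the role MILP plays beneath ReasonX's declarative query layer.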

Key Points
  • Uses Mixed-Integer Linear Programming (MILP) to reason over factual and contrastive instances
  • Two-layer architecture with Python interface and Constraint Logic Programming backend
  • Allows integration of user background knowledge as linear constraints for multi-level abstraction

Why It Matters

Supports regulatory compliance and user trust in high-stakes AI applications by making black-box decisions interpretable and open to interactive questioning.