Research & Papers

Möbius transforms and Shapley values for vector-valued functions on weighted directed acyclic multigraphs

Researchers extend core explainable AI math to handle vector data and complex network hierarchies.

Deep Dive

Patrick Forré and Abel Jansma have published a significant theoretical paper that generalizes the mathematical bedrock of explainable AI (XAI). Their work extends Möbius inversion and Shapley values—two tools used to decompose and attribute influence in complex systems—in two directions at once. First, it moves from real-valued to vector-valued functions, which matters for modern AI models with multi-dimensional outputs. Second, it moves from analysis on lattices (partial orders in which every pair of elements has a meet and a join) to weighted directed acyclic multigraphs (DAMGs), which can model the intricate hierarchical and networked relationships common in real-world data.
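For orientation, the classical setting the paper generalizes can be sketched concretely. The following is a minimal illustration on the Boolean (subset) lattice, using a hypothetical three-player cooperative game with made-up values: the Möbius transform of the game recovers the Harsanyi dividends, and each player's Shapley value can be computed either by the standard weighted-marginal formula or as an equal share of every dividend whose coalition contains that player. None of this is the paper's new DAMG machinery; it is only the lattice-based special case the new framework recovers.

```python
from itertools import combinations
from math import factorial

# Toy cooperative game on players {0, 1, 2}: v maps each coalition
# (a sorted tuple of players) to a real value. Values are hypothetical,
# chosen only to illustrate the classical formulas.
players = (0, 1, 2)
v = {
    (): 0.0,
    (0,): 1.0, (1,): 2.0, (2,): 0.0,
    (0, 1): 4.0, (0, 2): 2.0, (1, 2): 3.0,
    (0, 1, 2): 6.0,
}

def subsets(s):
    """Yield all subsets of the tuple s as sorted tuples."""
    for r in range(len(s) + 1):
        yield from combinations(s, r)

# Möbius transform on the subset lattice (Harsanyi dividends):
#   m(S) = sum over T ⊆ S of (-1)^{|S|-|T|} * v(T)
mobius = {
    S: sum((-1) ** (len(S) - len(T)) * v[T] for T in subsets(S))
    for S in subsets(players)
}

n = len(players)

def shapley(i):
    """Classical Shapley value of player i via weighted marginal contributions."""
    total = 0.0
    rest = tuple(p for p in players if p != i)
    for S in subsets(rest):
        weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
        total += weight * (v[tuple(sorted(S + (i,)))] - v[S])
    return total

phi = {i: shapley(i) for i in players}

# Equivalent route: each player gets an equal share of every Harsanyi
# dividend m(S) for coalitions S that contain the player.
phi_from_mobius = {
    i: sum(m / len(S) for S, m in mobius.items() if i in S)
    for i in players
}
```

Both routes agree, and the attributions sum to the grand coalition's value (the efficiency property); the paper's contribution is making analogues of these constructions work for vector-valued functions on DAMGs rather than on this subset lattice.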

The classical axioms for Shapley values (like linearity and symmetry) were insufficient for this generalized setting. The researchers resolved this by introducing new projection operators and two novel axioms: 'weak elements' and 'flat hierarchy'. These uniquely determine the new Shapley values via an explicit formula, while automatically ensuring the desirable classical properties. This framework isn't just an abstraction; it recovers all existing lattice-based definitions as special cases and finally provides a principled way to calculate feature attributions for systems on non-lattice structures, a long-standing challenge.

The implications are broad for machine learning and NLP. The framework opens new application areas by providing a rigorous mathematical foundation for attributing 'higher-order synergies'—complex interactions among multiple features—in models operating on graph-based data, sequential data, or any system with a directed, hierarchical dependency structure. This moves XAI beyond attributing importance to single features and toward explaining how groups of features interact to produce a model's vector-valued output.

Key Points
  • Generalizes Shapley values/Möbius transforms to vector-valued functions on weighted directed acyclic multigraphs (DAMGs)
  • Introduces new 'weak elements' and 'flat hierarchy' axioms to uniquely define attributions in complex networks
  • Enables explainability for AI systems on non-lattice structures, previously a mathematical gap

Why It Matters

Provides the mathematical foundation to explain complex, multi-output AI models working on networked data like knowledge graphs or dependency trees.