Research & Papers

ANDRE: An Attention-based Neuro-symbolic Differentiable Rule Extractor

This new framework replaces fuzzy logic with differentiable attention for stable rule extraction.

Deep Dive

Inductive Logic Programming (ILP) has long struggled to scale to noisy, probabilistic settings due to brittle discrete search and inaccurate fuzzy operators. The new paper from Sharifi et al. introduces ANDRE (Attention-based Neuro-symbolic Differentiable Rule Extractor), which replaces both rule templates and traditional logical operators with fully differentiable, attention-driven conjunction and disjunction modules. These modules approximate min-max logical semantics while preserving interpretability, enabling stable gradient flow and accurate reasoning over uncertain predicate valuations. By softly selecting, negating, or excluding predicates within rules, ANDRE supports flexible rule induction without sacrificing symbolic structure.
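To make the core idea concrete, here is a minimal sketch of attention-driven conjunction and disjunction over fuzzy truth values. The attention weights are a softmax with a temperature, so the weighted average approaches min (for AND) or max (for OR) as the temperature shrinks, while remaining smooth and differentiable throughout. This is an illustrative approximation of min-max semantics, not the paper's exact parameterization.

```python
import numpy as np

def soft_and(truth_values, temperature=0.1):
    """Differentiable conjunction: attention concentrates on the
    smallest truth value, approaching min(...) as temperature -> 0.
    Illustrative sketch, not ANDRE's exact module."""
    v = np.asarray(truth_values, dtype=float)
    w = np.exp(-v / temperature)   # low truth values get high attention
    w /= w.sum()
    return float(w @ v)            # attention-weighted average

def soft_or(truth_values, temperature=0.1):
    """Differentiable disjunction: attention concentrates on the
    largest truth value, approaching max(...)."""
    v = np.asarray(truth_values, dtype=float)
    w = np.exp(v / temperature)    # high truth values get high attention
    w /= w.sum()
    return float(w @ v)
```

Because both operators are weighted averages, their outputs always lie between the min and max of the inputs, and gradients flow to every predicate rather than only to the arg-min or arg-max, which is the usual source of instability in hard min-max fuzzy logic.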

Extensive experiments across classical ILP benchmarks, large-scale knowledge bases, and synthetic datasets with probabilistic predicates and noisy supervision demonstrate ANDRE's advantages. It achieves competitive or superior predictive performance while reliably recovering correct symbolic rules under uncertainty. Notably, ANDRE remains robust to moderate label noise, substantially outperforming existing differentiable ILP methods in both rule extraction quality and stability. This approach bridges the gap between symbolic reasoning and deep learning, offering a scalable path to interpretable AI in domains where data is inherently probabilistic or noisy.
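The soft handling of predicates described above (selecting, negating, or excluding a predicate within a rule body) can be pictured as a three-way attention over each predicate slot. The sketch below blends the predicate's truth value p, its negation 1 - p, and the constant 1 (the identity for conjunction, i.e. the predicate is dropped); the logits would be learned by gradient descent. This is a hypothetical illustration under assumed naming, not the paper's implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of logits
    e = np.exp(x - np.max(x))
    return e / e.sum()

def soft_literal(p, logits):
    """Softly include, negate, or exclude one predicate.

    p       -- fuzzy truth value of the predicate, in [0, 1]
    logits  -- three learnable scores for (include, negate, exclude)

    Returns a blend of p, 1 - p, and 1 (dropping the predicate
    contributes the conjunction identity). Hypothetical sketch.
    """
    w_inc, w_neg, w_exc = softmax(np.asarray(logits, dtype=float))
    return float(w_inc * p + w_neg * (1.0 - p) + w_exc * 1.0)
```

With sharply peaked logits the three modes recover the discrete choices (use p, use its negation, or omit it), so a trained model can be read off as a symbolic rule, which is how this style of soft selection preserves interpretability.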

Key Points
  • Replaces both rule templates and fuzzy logical operators with attention-driven conjunction/disjunction for stable differentiable reasoning.
  • Achieves competitive or superior predictive performance in an extensive benchmark study (35 pages, 8 figures, 10 tables) covering classical ILP tasks and large-scale KBs.
  • Remains robust to moderate label noise, outperforming existing differentiable ILP methods in rule extraction quality and stability.

Why It Matters

Enables trustworthy AI by extracting interpretable rules from noisy real-world data, crucial for high-stakes decision-making in healthcare, finance, and autonomous systems.