Grounding vs. Compositionality: On the Non-Complementarity of Reasoning in Neuro-Symbolic Systems
Grounding alone fails; explicit reasoning training is required for generalization.
A new paper from researchers Mahnoor Shahid and Hannes Rothe, accepted at AAAI MAKE 2026, presents the first systematic empirical analysis challenging a core assumption in neuro-symbolic AI: that compositional reasoning will naturally emerge as a byproduct of successful symbol grounding. The authors introduce the Iterative Logic Tensor Network (iLTN), a fully differentiable architecture designed for multi-step deduction. Using a formal taxonomy of generalization that probes for novel entities, unseen relations, and complex rule compositions, they demonstrate that a model trained solely on a grounding objective fails to generalize. In contrast, the full iLTN, trained jointly on perceptual grounding and multi-step reasoning, achieves high zero-shot accuracy across all tasks.
This work provides strong empirical evidence that symbol grounding, while necessary, is insufficient for generalization. It establishes that reasoning is not an emergent property but a distinct capability that requires an explicit learning objective. The findings have significant implications for the design of neuro-symbolic systems, suggesting that future architectures must incorporate dedicated reasoning objectives rather than relying on grounding to implicitly foster compositional skills. The iLTN architecture itself offers a practical, differentiable framework for integrating these dual objectives, potentially enabling more robust AI systems capable of out-of-distribution reasoning in areas such as computer vision, machine learning, and logic.
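To make the dual-objective idea concrete, here is a minimal PyTorch sketch of a jointly trained grounding-and-reasoning model. Everything in it is an illustrative assumption rather than the authors' actual iLTN code: the class and variable names, the toy entity-id inputs standing in for perception, the GRU cell standing in for the iterative deduction step, and the unweighted sum of the two losses.

```python
# Hedged sketch of a joint grounding + reasoning objective in the spirit of
# the paper. Names, shapes, and loss weighting are illustrative assumptions,
# not the authors' iLTN implementation.
import torch
import torch.nn as nn

class JointGroundingReasoner(nn.Module):
    def __init__(self, n_entities: int, dim: int = 64):
        super().__init__()
        # Grounding: map raw inputs (entity ids as a stand-in for perception)
        # to vector embeddings of symbols.
        self.embed = nn.Embedding(n_entities, dim)
        # Predicate head: a fuzzy truth value in [0, 1] for a relation between
        # two grounded symbols, in the style of Logic Tensor Networks.
        self.predicate = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1), nn.Sigmoid()
        )
        # Iterative step: a differentiable state update applied repeatedly,
        # standing in for iLTN's multi-step deduction.
        self.step = nn.GRUCell(dim, dim)

    def ground(self, subj, obj):
        # Score a direct fact from the grounded embeddings alone.
        s, o = self.embed(subj), self.embed(obj)
        return self.predicate(torch.cat([s, o], dim=-1)).squeeze(-1)

    def reason(self, subj, obj, n_steps: int = 3):
        # Refine the subject state over several steps before scoring, so
        # multi-hop conclusions can be represented.
        s, o = self.embed(subj), self.embed(obj)
        state = s
        for _ in range(n_steps):
            state = self.step(o, state)
        return self.predicate(torch.cat([state, o], dim=-1)).squeeze(-1)

model = JointGroundingReasoner(n_entities=100)
bce = nn.BCELoss()

# Toy batch: direct facts for the grounding loss, multi-hop queries for the
# reasoning loss (random placeholders here).
subj = torch.randint(0, 100, (32,))
obj = torch.randint(0, 100, (32,))
fact_labels = torch.randint(0, 2, (32,)).float()
hop_labels = torch.randint(0, 2, (32,)).float()

loss_ground = bce(model.ground(subj, obj), fact_labels)  # perceptual grounding
loss_reason = bce(model.reason(subj, obj), hop_labels)   # explicit multi-step deduction
loss = loss_ground + loss_reason  # joint objective
loss.backward()
```

Dropping `loss_reason` from the sum reproduces a grounding-only baseline, which is exactly the configuration the paper reports failing to generalize.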
- iLTN is a fully differentiable architecture for multi-step deduction, trained jointly on grounding and reasoning.
- A model trained solely on grounding fails to generalize across novel entities, unseen relations, and complex rule compositions (an evaluation sketch follows this list).
- The full iLTN achieves high zero-shot accuracy, showing that reasoning is a distinct capability requiring explicit training.
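The taxonomy-based evaluation can likewise be sketched as a small harness that reports zero-shot accuracy per generalization axis. Only the three axis names come from the paper; the split construction, the scorer, and the 0.5 decision threshold below are hypothetical placeholders.

```python
# Hedged sketch of a zero-shot evaluation over the paper's three
# generalization axes. All data and the scorer are placeholders.
from typing import Callable
import torch

def zero_shot_accuracy(
    score: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],
    subj: torch.Tensor,
    obj: torch.Tensor,
    labels: torch.Tensor,
) -> float:
    # Threshold fuzzy truth values at 0.5 and compare to gold labels.
    with torch.no_grad():
        preds = (score(subj, obj) > 0.5).float()
    return (preds == labels).float().mean().item()

# One held-out split per axis: entities, relations, and rule compositions
# excluded from training (random placeholder queries here).
batch = 64
splits = {
    "novel_entities":    (torch.randint(80, 100, (batch,)), torch.randint(80, 100, (batch,))),
    "unseen_relations":  (torch.randint(0, 80, (batch,)),  torch.randint(0, 80, (batch,))),
    "rule_compositions": (torch.randint(0, 80, (batch,)),  torch.randint(0, 80, (batch,))),
}

# A random scorer marks the chance baseline against which zero-shot
# accuracy is judged; a trained model's `reason` method would plug in here.
chance_scorer = lambda s, o: torch.rand(s.shape[0])

for name, (s, o) in splits.items():
    y = torch.randint(0, 2, (batch,)).float()  # placeholder gold labels
    print(f"{name}: {zero_shot_accuracy(chance_scorer, s, o, y):.2f}")
```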
Why It Matters
This paper challenges a core design assumption in neuro-symbolic AI, showing that reasoning must be explicitly trained rather than assumed to emerge from grounding.