Agent Frameworks

AGEL-Comp: A Neuro-Symbolic Framework for Compositional Generalization in Interactive Agents

A new hybrid framework combines causal graphs, logic, and LLMs for robust agent reasoning.

Deep Dive

Researchers Mahnoor Shahid and Hannes Rothe have introduced AGEL-Comp, a neuro-symbolic AI agent architecture designed to overcome a critical weakness in LLM-based agents: poor compositional generalization, i.e., the failure to adapt to novel combinations of tasks or environments. Published on arXiv and accepted at IntelliSys 2026, the framework integrates three core innovations: a dynamic Causal Program Graph (CPG) that models procedural and causal knowledge as a directed hypergraph; an Inductive Logic Programming (ILP) engine that synthesizes new Horn clauses from experiential feedback; and a hybrid reasoning core in which an LLM proposes candidate sub-goals that a Neural Theorem Prover (NTP) verifies for logical consistency. Together these form a deduction-abduction learning cycle: the agent deduces plans and abductively expands its symbolic world model, while a neural adaptation phase keeps its reasoning aligned with the new knowledge.
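To make the architecture concrete, here is a minimal sketch, not the authors' implementation: all names (`CausalProgramGraph`, `verify_subgoal`, the example rules) are hypothetical. It models the CPG as a directed hypergraph whose edges map a set of precondition facts to a set of effect facts, and stands in for the NTP check with simple forward chaining: an LLM-proposed sub-goal is accepted only if it is derivable from the current state under the causal model.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class HyperEdge:
    """A causal/procedural rule: all sources jointly bring about the targets."""
    sources: frozenset  # precondition facts
    targets: frozenset  # effect facts
    action: str         # the action mediating the transition

@dataclass
class CausalProgramGraph:
    """Toy directed hypergraph standing in for the paper's CPG."""
    edges: list = field(default_factory=list)

    def add_rule(self, sources, targets, action):
        self.edges.append(HyperEdge(frozenset(sources), frozenset(targets), action))

    def reachable(self, facts):
        """Forward-chain: close the fact set under all applicable hyperedges."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for e in self.edges:
                if e.sources <= facts and not e.targets <= facts:
                    facts |= e.targets
                    changed = True
        return facts

def verify_subgoal(cpg, current_facts, subgoal):
    """Stand-in for the NTP check: accept an LLM-proposed sub-goal only if
    it is derivable from the current state under the causal model."""
    return subgoal in cpg.reachable(current_facts)

# Usage: a two-step craft-and-unlock scenario.
cpg = CausalProgramGraph()
cpg.add_rule({"has_wood", "has_flint"}, {"has_key"}, "craft_key")
cpg.add_rule({"has_key", "at_door"}, {"door_open"}, "unlock")

state = {"has_wood", "has_flint", "at_door"}
print(verify_subgoal(cpg, state, "door_open"))   # derivable via craft_key then unlock -> True
print(verify_subgoal(cpg, state, "has_dragon"))  # no rule derives this -> False
```

A real verifier would prove consistency rather than reachability, but the propose-then-verify division of labor is the same: the neural component generates candidates and the symbolic component filters them.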

The researchers evaluated AGEL-Comp in the Retro Quest simulation environment, with scenarios designed specifically to probe compositional generalization, and report that it outperforms purely LLM-based models on these tasks. By grounding actions in symbolic reasoning and causal models, the framework charts a principled path toward agents that build explicit, interpretable, and compositionally structured understanding of their environments, addressing a key limitation of current AI systems.

Key Points
  • AGEL-Comp uses a Causal Program Graph (CPG) as a world model to represent procedural and causal knowledge as a directed hypergraph.
  • It employs Inductive Logic Programming (ILP) to synthesize new Horn clauses from experiential feedback, grounding symbolic knowledge through interaction.
  • In the Retro Quest simulation, AGEL-Comp outperformed pure LLM-based models on compositional generalization scenarios.
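The ILP step in the second point can be illustrated with a toy sketch. This is a deliberately naive stand-in (a real ILP engine searches a space of candidate clauses), and all names here are hypothetical: it generalizes a Horn clause body by intersecting the facts shared by every successful interaction, then keeps the clause only if no failed interaction also satisfies it.

```python
def induce_horn_clause(head, positives, negatives):
    """Naive bottom-up induction (stand-in for a real ILP engine):
    the literals shared by all successful examples become the clause
    body; the clause is kept only if it rejects every failure."""
    body = set.intersection(*(set(p) for p in positives))
    if any(body <= set(n) for n in negatives):
        return None  # over-general: some failure also satisfies the body
    return (head, frozenset(body))

# Usage: learn when unlocking a door succeeds from interaction feedback.
positives = [
    {"has_key", "at_door", "daytime"},
    {"has_key", "at_door", "night"},
]
negatives = [
    {"at_door", "daytime"},  # no key -> unlock failed
]
clause = induce_horn_clause("door_open", positives, negatives)
# Learned clause: door_open :- has_key, at_door.
```

The induced body drops the irrelevant time-of-day literal, which is the sense in which interaction feedback grounds the symbolic knowledge.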

Why It Matters

This work provides a blueprint for building more robust, interpretable AI agents that can handle novel situations without catastrophic failure.