Research & Papers

Causal Learning with Neural Assemblies

Neural assemblies can now learn cause-effect relationships without backpropagation.

Deep Dive

A new paper by Evangelia Kopadi and Dimitris Kalles demonstrates that neural assemblies—groups of neurons that fire together and strengthen through co-activation—can learn the direction of causal influence between variables. The researchers introduce DIRECT (DIRectional Edge Coupling/Training), a mechanism that co-activates source and target assemblies under an adaptive gain schedule to internalize directed relations. Unlike backpropagation-based methods, DIRECT relies solely on local plasticity, making the resulting causal claims auditable at the mechanism level. This represents a significant step toward biologically plausible causal learning.
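The mechanism described above can be illustrated with a small sketch. This is not the paper's implementation — the update rule, gain schedule, and winner selection here are illustrative assumptions — but it shows the core idea: co-activating a source assembly before a target one under a decaying external gain, with purely local Hebbian updates, produces an asymmetry between the forward and reverse links.

```python
import numpy as np

def train_direct(n=20, steps=50, eta=0.1, gain0=1.0, decay=0.95, seed=0):
    """Hypothetical sketch of DIRECT-style directed coupling.

    The source assembly fires first; the target is driven through the
    forward weights plus an external gain that decays over training.
    Local Hebbian updates potentiate the forward (pre-then-post) link
    and mildly depress the reverse one -- no backpropagation involved.
    """
    rng = np.random.default_rng(seed)
    w_fwd = rng.uniform(0.0, 0.1, (n, n))  # source -> target weights
    w_rev = rng.uniform(0.0, 0.1, (n, n))  # target -> source weights
    gain = gain0
    for _ in range(steps):
        src = (rng.random(n) < 0.5).astype(float)       # source pattern
        drive = w_fwd.T @ src + gain * rng.random(n)    # gain-gated drive
        tgt = (drive > np.median(drive)).astype(float)  # winning half fires
        w_fwd += eta * np.outer(src, tgt)               # potentiate forward
        w_rev -= 0.5 * eta * np.outer(tgt, src)         # depress reverse
        np.clip(w_rev, 0.0, None, out=w_rev)
        gain *= decay                                   # adaptive gain schedule
    return w_fwd, w_rev

w_fwd, w_rev = train_direct()
print(w_fwd.mean() - w_rev.T.mean())  # positive: the forward link dominates
```

Because every update uses only pre- and post-synaptic activity, the learned direction can be read directly off the weights — which is what makes the causal claim auditable at the mechanism level.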

The framework's effectiveness is verified through a dual-readout validation strategy: synaptic-strength asymmetry, which measures the emergent weight gap between forward and reverse links, and functional propagation overlap, which quantifies the reliability of directional signal flow. Across multiple domains, DIRECT achieves perfect structural recovery under a supervised, known-structure setting. These results establish neural assemblies as an auditable bridge between biologically plausible dynamics and formal causal models, offering an "explainable by design" framework where causal claims are traceable to specific neural winners and synaptic asymmetries. This work opens new avenues for interpretable AI systems that can learn cause-effect relationships in a manner akin to biological neural networks.
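The two readouts can be sketched as simple functions. Function names, the top-k winner rule, and the toy graph below are illustrative assumptions, not the paper's code; they only show what each metric measures.

```python
import numpy as np

def strength_asymmetry(w_fwd, w_rev):
    """Emergent weight gap between a forward link and its reverse."""
    return float(w_fwd.mean() - w_rev.T.mean())

def propagation_overlap(w, src_pattern, ref_winners, k):
    """Fraction of reference target winners recovered when src_pattern
    is propagated through w and the top-k driven units are kept."""
    drive = w.T @ src_pattern
    winners = set(np.argsort(drive)[-k:])
    return len(winners & set(ref_winners)) / k

# Toy example: neuron 0 drives neurons 1-3 in the forward direction only.
n, k = 10, 3
w_fwd = np.zeros((n, n)); w_fwd[0, [1, 2, 3]] = 1.0
w_rev = np.zeros((n, n))
src = np.zeros(n); src[0] = 1.0

print(strength_asymmetry(w_fwd, w_rev))           # positive gap
print(propagation_overlap(w_fwd, src, [1, 2, 3], k))  # reliable forward flow
```

In this toy setup the forward direction both carries a positive weight gap and propagates the signal to exactly the expected winners, while the empty reverse link does neither — the same dual evidence the paper uses to certify directionality.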

Key Points
  • DIRECT mechanism uses local plasticity, not backpropagation, to learn causal directionality.
  • Validated via synaptic-strength asymmetry and functional propagation overlap metrics.
  • Achieves perfect structural recovery in supervised settings across multiple domains.

Why It Matters

Enables auditable, explainable causal learning in AI, mimicking biological neural processes.