Research & Papers

Scalable Learning in Structured Recurrent Spiking Neural Networks without Backpropagation

Bo Tang and Weiwei Xie's structured recurrent SNN achieves stable learning on benchmark classification tasks using only local plasticity, with no backpropagation.

Deep Dive

Bo Tang and Weiwei Xie present a novel framework for scalable learning in structured recurrent Spiking Neural Networks (SNNs) that eliminates the need for backpropagation. Their architecture consists of multiple locally dense recurrent layers augmented with sparse small-world long-range projections to a readout population. These long-range connections are largely fixed, preserving routing efficiency and hardware scalability. Synaptic adaptation is performed using strictly local plasticity mechanisms, addressing a major challenge in training deep recurrent SNNs with sparse connectivity. The work was submitted to arXiv on May 1, 2026 (arXiv:2605.00402) and spans 7 pages with 2 figures.
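To make the connectivity structure concrete, here is a minimal numpy sketch of one way such an architecture could be wired: dense recurrent weights within each local layer, plus fixed sparse small-world long-range projections and a fixed sparse readout projection. The layer sizes, sparsity levels, Watts-Strogatz-style construction, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_local_recurrent(n, scale=0.1):
    """Dense recurrent weight matrix within one local layer (no self-loops)."""
    w = rng.normal(0.0, scale, size=(n, n))
    np.fill_diagonal(w, 0.0)
    return w

def small_world_mask(n, k=4, rewire_p=0.1):
    """Watts-Strogatz-style sparse connectivity: each neuron targets its k
    nearest ring neighbours, and each edge is rewired to a random target
    with probability rewire_p, producing a few long-range shortcuts."""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for offset in range(1, k + 1):
            j = (i + offset) % n
            if rng.random() < rewire_p:
                j = int(rng.integers(n))      # rewired long-range edge
            if j != i:
                mask[i, j] = True
    return mask

n_hidden, n_out = 200, 10   # hypothetical population sizes

# Locally dense recurrent layers: these weights are the plastic ones.
local_layers = [dense_local_recurrent(n_hidden) for _ in range(3)]

# Fixed sparse small-world long-range projection between populations.
long_range = small_world_mask(n_hidden) * rng.normal(0.0, 0.3, size=(n_hidden, n_hidden))

# Fixed sparse random projection onto the readout population.
readout_mask = rng.random((n_hidden, n_out)) < 0.1
readout_proj = readout_mask * rng.normal(0.0, 0.3, size=(n_hidden, n_out))
```

Keeping the long-range and readout projections fixed is what preserves routing efficiency: only the locally dense weights need on-chip plasticity circuitry.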

To enable supervised learning without backpropagation, the authors introduce a biologically motivated framework combining three components: population-based winner-take-all (WTA) teaching signals at the output layer, fixed random broadcast alignment feedback pathways, and low-dimensional modulatory neuron populations that gate synaptic updates through three-factor learning rules with eligibility traces. This design supports deep recurrent computation with sparse global communication and purely local updates. The approach demonstrates stable learning and competitive performance on benchmark classification tasks, highlighting its potential for energy-efficient and hardware-compatible SNN training beyond gradient-based methods.
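The following numpy sketch shows how one step of such a three-factor update might look: a decaying eligibility trace built from local pre/post spike coincidences, a winner-take-all teaching signal at the readout, and a fixed random broadcast pathway that delivers the modulatory factor. The sizes, the +1/-1 teaching-signal encoding, the decay constant, and all names are assumptions for illustration, not the paper's actual update rule.

```python
import numpy as np

rng = np.random.default_rng(1)

n_hidden, n_out = 200, 10
w_rec = rng.normal(0.0, 0.05, size=(n_hidden, n_hidden))   # plastic local recurrent weights
b_feedback = rng.normal(0.0, 1.0, size=(n_out, n_hidden))   # fixed random broadcast pathway
eligibility = np.zeros_like(w_rec)

def three_factor_step(pre_spikes, post_spikes, out_spikes, target_label,
                      lr=1e-3, trace_decay=0.9):
    """One illustrative three-factor update.
    Factors 1-2: local pre/post spike coincidences accumulate in a decaying
    eligibility trace. Factor 3: a WTA teaching signal at the readout
    (+1 for the target unit, -1 for the current winner) is projected back
    through the fixed random feedback matrix and gates the trace, so no
    error gradient is ever backpropagated."""
    global eligibility, w_rec

    # local eligibility trace (pre x post coincidences, exponential decay)
    eligibility = trace_decay * eligibility + np.outer(pre_spikes, post_spikes)

    # population WTA teaching signal at the readout
    teach = np.zeros(n_out)
    teach[target_label] += 1.0
    teach[int(np.argmax(out_spikes))] -= 1.0

    # modulatory signal delivered to hidden neurons via fixed broadcast feedback
    modulator = b_feedback.T @ teach            # shape (n_hidden,)

    # three-factor rule: the modulator gates the locally stored eligibility
    w_rec += lr * eligibility * modulator[None, :]
```

Because the weight change factorizes into a locally stored trace and a broadcast scalar-like modulator per neuron, every update stays local to the synapse, which is what makes the scheme attractive for neuromorphic hardware.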

Key Points
  • No backpropagation or surrogate gradients; uses local plasticity mechanisms
  • Sparse small-world long-range projections with fixed connectivity for hardware scalability
  • Competitive performance on benchmark classification tasks with stable learning

Why It Matters

Enables energy-efficient, biologically plausible SNN training for scalable neuromorphic hardware.