Working Memory in a Recurrent Spiking Neural Network With Heterogeneous Synaptic Delays
A novel spiking neural network achieves a perfect F1 score of 1.0 for storing and recalling complex temporal patterns.
A new research paper by Laurent U Perrinet introduces a novel approach to working memory in spiking neural networks (SNNs). The model uses a recurrent SNN architecture in which each pair of the N neurons is connected through D=41 distinct synaptic delays, represented as a 3D weight tensor W ∈ ℝ^{N×N×D}. This heterogeneous delay structure allows the network to store arbitrary temporal spike patterns by encoding them as sequential chains of overlapping "Spiking Motifs": contiguous windows that uniquely predict the spikes at the next time step.
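The article does not include code, but the core idea of a recurrent update through a 3D delayed weight tensor can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the toy sizes, the weight scale, and the instantaneous-threshold neuron model are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, T = 32, 41, 100                       # toy sizes: neurons, delays, time steps
W = 0.05 * rng.standard_normal((N, N, D))   # 3D weight tensor W ∈ R^{N×N×D}
threshold = 1.0                             # hypothetical firing threshold

S = np.zeros((T, N))                        # spike raster, S[t, j] ∈ {0, 1}
S[0] = (rng.random(N) < 0.2).astype(float)  # random spikes in the first time bin

for t in range(1, T):
    # Each delay channel d reads the spikes emitted at time t - 1 - d,
    # so the input at time t is a sum over the D delayed spike vectors.
    I = np.zeros(N)
    for d in range(D):
        if t - 1 - d >= 0:
            I += W[:, :, d] @ S[t - 1 - d]
    S[t] = (I > threshold).astype(float)    # simple threshold neuron (assumption)
```

The key point the sketch captures is that a spike at one time step can influence the network at D different future times, which is what lets chains of overlapping motifs carry a pattern forward.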
The network was trained end-to-end using surrogate-gradient backpropagation through time, a method that enables gradient-based optimization in non-differentiable spiking networks. On a synthetic benchmark task involving M=16 complex patterns with N=512 neurons over T=1000 time steps, the trained network achieved a perfect mean F1 score of 1.0 for pattern recall. The researchers observed that recall emerged first near the clamped initialization window and propagated forward in time, demonstrating the network's ability to maintain and retrieve precise temporal sequences.
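Surrogate-gradient training works by keeping the hard spike threshold in the forward pass while substituting a smooth pseudo-derivative in the backward pass. A minimal sketch of one common choice, the fast-sigmoid surrogate; the paper may use a different surrogate function, and the `beta` sharpness parameter here is a hypothetical value:

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: Heaviside spike nonlinearity (non-differentiable)."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: fast-sigmoid surrogate for d(spike)/dv, used in
    place of the Heaviside derivative during backprop through time."""
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2

v = np.array([0.2, 0.99, 1.0, 1.5])  # example membrane potentials
s = spike(v)             # hard thresholding in the forward pass
g = surrogate_grad(v)    # smooth, nonzero gradient near the threshold
```

Because the surrogate is largest where the potential sits near the threshold, gradient descent concentrates learning on the spikes that are closest to flipping.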
This work represents a significant advancement in neuromorphic computing, showing that heterogeneous synaptic delays provide an efficient substrate for working memory in SNNs. The approach enables energy-efficient deployment on neuromorphic hardware at the edge, where traditional artificial neural networks struggle with temporal processing and power constraints. The perfect recall performance on complex patterns suggests practical applications in real-time sensory processing, robotics, and brain-machine interfaces.
- Uses 41 distinct synaptic delays per connection modeled as a 3D weight tensor W ∈ ℝ^{N×N×D}
- Achieved perfect F1 score of 1.0 on benchmark of 16 patterns with 512 neurons over 1000 time steps
- Enables energy-efficient temporal pattern storage for neuromorphic edge deployment
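The F1 score cited above can be understood by treating recall as binary spike detection over every (neuron, time) bin. A minimal sketch under that assumption; the paper's exact scoring protocol (e.g. any tolerance window around spike times) is not specified here:

```python
import numpy as np

def spike_f1(target, recalled):
    """F1 score treating each (neuron, time) bin as a binary detection."""
    tp = np.sum((target == 1) & (recalled == 1))   # correctly recalled spikes
    fp = np.sum((target == 0) & (recalled == 1))   # spurious spikes
    fn = np.sum((target == 1) & (recalled == 0))   # missed spikes
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

target = np.array([[0, 1, 0], [1, 0, 1]])          # toy spike raster
print(spike_f1(target, target.copy()))             # identical recall → 1.0
```

An F1 of 1.0 therefore means every target spike was reproduced and no extra spikes were emitted, for all 16 patterns.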
Why It Matters
Enables efficient temporal processing for neuromorphic hardware, advancing real-time AI applications at the edge.