Training Deep Normalization-Free Spiking Neural Networks with Lateral Inhibition
A new method uses biological 'lateral inhibition' to train energy-efficient spiking neural networks 10x deeper than before.
A team of researchers has published a significant advance in neuromorphic computing with their paper 'Training Deep Normalization-Free Spiking Neural Networks with Lateral Inhibition,' accepted to the prestigious ICLR 2026 conference. The work tackles a fundamental tension in Spiking Neural Networks (SNNs): they are prized for their extreme energy efficiency and biological plausibility, yet training them at depth has historically required artificial 'normalization' techniques that compromise that very realism. The researchers propose a novel framework that replaces traditional SNN layers with a computational model inspired by cortical circuits in the brain, featuring separate populations of excitatory and inhibitory neurons that interact dynamically.
The core innovation is 'lateral inhibition,' in which inhibitory neurons suppress the activity of their excitatory neighbors, a mechanism ubiquitous in biological neural systems. This E-I (excitatory-inhibitory) circuit provides natural activity regulation through subtractive and divisive inhibition, eliminating the need for external normalization. To make the system trainable with backpropagation, the team developed two key techniques: 'E-I Init' for balanced parameter initialization and 'E-I Prop' for stable gradient flow. Experiments demonstrate that the framework can successfully train deep, normalization-free SNNs that maintain biological constraints while achieving performance competitive with conventional methods on benchmark datasets. This breakthrough provides both a practical solution for building more efficient AI hardware and a computational platform for neuroscientists to model large-scale brain function.
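The summary does not reproduce the paper's circuit equations, so the following is a minimal, rate-simplified sketch of what such an E-I layer might look like in PyTorch. Everything here is illustrative: the class name `EICircuitLayer`, the rectangular surrogate gradient, and the particular subtractive-plus-divisive form are assumptions standing in for the authors' actual formulation.

```python
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient, a standard
    trick for backpropagating through the non-differentiable spike."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Pass gradients only for potentials near the firing threshold.
        return grad_out * (v.abs() < 0.5).float()


class EICircuitLayer(nn.Module):
    """Hypothetical E-I layer: inhibitory units read the excitatory drive and
    feed back subtractive and divisive inhibition, so activity is regulated
    by the circuit itself rather than by a normalization layer."""

    def __init__(self, in_features, n_exc, n_inh):
        super().__init__()
        self.w_e = nn.Linear(in_features, n_exc)         # feedforward excitation
        self.w_ei = nn.Linear(n_exc, n_inh, bias=False)  # E -> I projection
        self.w_ie = nn.Linear(n_inh, n_exc, bias=False)  # I -> E lateral inhibition

    def forward(self, x):
        drive = self.w_e(x)                   # excitatory input current
        inh = torch.relu(self.w_ei(drive))    # inhibitory population activity
        fb = torch.relu(self.w_ie(inh))       # inhibitory feedback onto E cells
        # Subtraction shifts the effective potential; division rescales its gain.
        v = (drive - fb) / (1.0 + fb)
        return SurrogateSpike.apply(v - 1.0)  # spike where v crosses threshold 1.0


layer = EICircuitLayer(in_features=128, n_exc=256, n_inh=64)
spikes = layer(torch.randn(32, 128))          # binary {0, 1} outputs, shape (32, 256)
```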
- Proposes a normalization-free training framework for Spiking Neural Networks (SNNs) using biologically inspired 'lateral inhibition' with separate E and I neuron populations.
- Introduces two novel stabilization techniques: 'E-I Init' for parameter balancing and 'E-I Prop' for decoupling backpropagation, enabling end-to-end training of deep networks (one reading of the initialization is sketched after this list).
- Enables stable training of biologically constrained, deep SNNs that achieve competitive performance, bridging the gap between AI efficiency and biological realism.
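The bullets mention 'E-I Init' without detail, so here is one plausible reading of balanced parameter initialization, continuing the sketch above. The function name `ei_balanced_init` and the mean-current balance condition are assumptions for illustration, not the authors' published procedure.

```python
import torch
import torch.nn as nn


@torch.no_grad()
def ei_balanced_init(exc: nn.Linear, inh: nn.Linear) -> None:
    """Illustrative 'balanced' E-I initialization: draw sign-constrained
    weights, then rescale the inhibitory matrix so the mean inhibitory
    current cancels the mean excitatory current at every layer, keeping
    pre-activations near zero without any normalization layer."""
    nn.init.kaiming_uniform_(exc.weight, nonlinearity="relu")
    exc.weight.abs_()            # Dale's law: excitatory weights stay non-negative
    nn.init.kaiming_uniform_(inh.weight, nonlinearity="relu")
    inh.weight.abs_()            # inhibitory weights non-negative (applied subtractively)
    fan_e = exc.weight.shape[1]  # excitatory inputs per neuron
    fan_i = inh.weight.shape[1]  # inhibitory inputs per neuron
    # Balance condition: fan_i * E[w_i] == fan_e * E[w_e].
    inh.weight.mul_(fan_e * exc.weight.mean() / (fan_i * inh.weight.mean()))
```

In this reading, balance is enforced once at initialization; 'E-I Prop' would then be responsible for keeping gradients, rather than activations, well scaled during training.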
Why It Matters
Paves the way for more energy-efficient, brain-inspired AI hardware and gives computational neuroscientists a biologically faithful platform for modeling large-scale brain function.