Linearized Bregman Iterations for Sparse Spiking Neural Networks
New optimizer cuts the active parameters of Spiking Neural Networks roughly in half while maintaining accuracy.
A team of researchers including Daniel Windhager, Bernhard A. Moser, and Michael Lunglmayr has published a paper introducing Linearized Bregman Iterations (LBI) as a training method for Spiking Neural Networks (SNNs). SNNs are a promising, energy-efficient alternative to standard Artificial Neural Networks (ANNs), but they often remain computationally heavy due to large parameter counts. The LBI optimizer tackles this by enforcing sparsity, drastically reducing the number of active (non-zero) parameters: its iterative updates are built on the Bregman distance of a sparsity-promoting regularizer and take the form of proximal soft-thresholding steps, as sketched below. To improve performance, the team employed AdaBreg, which adapts the popular Adam optimizer, with its momentum and bias correction, to the Bregman framework.
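To make the update rule concrete, here is a minimal sketch of one linearized Bregman step in its textbook form: a dual accumulator collects scaled gradient steps, and the weights are read off through a soft-thresholding (shrinkage) map, so a weight stays exactly zero until its accumulator entry crosses the threshold. The helper names and the hyperparameters `tau`, `lamb`, and `delta` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, lamb):
    """Proximal (shrinkage) operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - lamb, 0.0)

def lbi_step(v, grad, tau=0.05, lamb=1.0, delta=1.0):
    """One linearized Bregman iteration on a weight tensor.

    v    : dual accumulator, same shape as the weights
    grad : gradient of the loss w.r.t. the current weights
    Returns the updated accumulator and the new (sparse) weights.
    """
    v = v - tau * grad                   # gradient step on the dual variable
    w = delta * soft_threshold(v, lamb)  # weights stay exactly zero until
                                         # |v| exceeds the threshold lamb
    return v, w

# Toy usage: recover a 3-sparse weight vector under a quadratic loss.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]            # only 3 of 20 weights are active
b = A @ w_true
v, w = np.zeros(20), np.zeros(20)
for _ in range(500):
    v, w = lbi_step(v, A.T @ (A @ w - b) / len(b))
print(np.count_nonzero(w), "active of", w.size)
```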
The method's effectiveness was validated on three benchmarks widely used in SNN research: the Spiking Heidelberg Digits (SHD), Spiking Speech Commands (SSC), and Permuted Sequential MNIST (PSMNIST) datasets. The results were striking: models trained with LBI used approximately 50% fewer active parameters while matching the accuracy of models trained with the standard Adam optimizer. This demonstrates that convex, sparsity-inducing optimization techniques like LBI are highly effective for SNNs, directly addressing a key bottleneck for their practical deployment on low-power neuromorphic hardware, where efficiency is paramount.
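For clarity on the metric: "active parameters" are simply the weights that remain non-zero after training, so a reduction like the reported ~50% can be checked with a count along the lines of the hypothetical helper below (PyTorch used for illustration; this is not code from the paper).

```python
import torch

def active_fraction(model: torch.nn.Module) -> float:
    """Fraction of a model's parameters that are non-zero ('active')."""
    total = sum(p.numel() for p in model.parameters())
    active = sum(int(p.count_nonzero()) for p in model.parameters())
    return active / total

# A freshly initialized layer is fully dense, so this prints ~1.0;
# sparsity-enforcing training would drive the fraction down toward 0.5 or lower.
print(active_fraction(torch.nn.Linear(128, 64)))
```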
- Introduces Linearized Bregman Iterations (LBI), a new optimizer that enforces sparsity in Spiking Neural Network training.
- Achieves a ~50% reduction in active parameters on SHD, SSC, and PSMNIST benchmarks without sacrificing model accuracy.
- Employs the AdaBreg optimizer, a Bregman-adapted version of Adam, to improve convergence and generalization during training (a minimal sketch follows this list).
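To show how the pieces combine, here is a minimal sketch of an AdaBreg-style step, assuming the common construction in which Adam's bias-corrected moment estimates drive the update of the dual accumulator, followed by the same shrinkage map as plain LBI. The paper's exact formulation may differ; all names and default hyperparameters here are illustrative.

```python
import numpy as np

def adabreg_step(v, m, s, grad, t, lr=1e-3, beta1=0.9, beta2=0.999,
                 eps=1e-8, lamb=1.0, delta=1.0):
    """One AdaBreg-style update: Adam moments applied to the dual variable.

    v, m, s : dual accumulator and Adam first/second moment estimates
    grad    : gradient of the loss w.r.t. the current weights
    t       : 1-based step count, used for bias correction
    """
    m = beta1 * m + (1 - beta1) * grad            # momentum (first moment)
    s = beta2 * s + (1 - beta2) * grad**2         # second moment
    m_hat = m / (1 - beta1**t)                    # bias corrections
    s_hat = s / (1 - beta2**t)
    v = v - lr * m_hat / (np.sqrt(s_hat) + eps)   # Adam step on the dual variable
    w = delta * np.sign(v) * np.maximum(np.abs(v) - lamb, 0.0)  # shrinkage
    return v, m, s, w
```

Note that the adaptive step acts on the dual accumulator v rather than on the weights directly, so the shrinkage map still produces exact zeros and the sparsity is preserved.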
Why It Matters
This directly enables more efficient, smaller SNNs, critical for deploying AI on low-power neuromorphic and edge computing hardware.