Sharpness Aware Surrogate Training for Spiking Neural Networks
New training technique dramatically reduces the 'transfer gap' in neuromorphic AI systems.
Researcher Maximilian Nicholson has published a breakthrough paper titled 'Sharpness Aware Surrogate Training for Spiking Neural Networks' on arXiv. The work addresses a fundamental challenge in neuromorphic computing: the 'transfer gap' between surrogate models used during training and the actual spiking neural networks deployed on neuromorphic hardware. Traditional surrogate gradient methods couple a nonsmooth forward model with a biased gradient estimator, limiting real-world performance.
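To make the mismatch concrete, here is a minimal numpy sketch of the standard surrogate gradient trick (not the paper's code): the forward pass uses a hard, nonsmooth Heaviside spike, while the backward pass substitutes the derivative of a sigmoid centered at the threshold. The names `spike_forward`, `surrogate_grad`, and the parameters `theta` and `beta` are illustrative choices, not from the paper.

```python
import numpy as np

def spike_forward(v, theta=1.0):
    """Nonsmooth forward pass: hard Heaviside spike at threshold theta."""
    return (v >= theta).astype(float)

def surrogate_grad(v, theta=1.0, beta=5.0):
    """Biased backward estimator: derivative of a sigmoid centered at
    the threshold, used in place of the true Heaviside derivative,
    which is zero almost everywhere and useless for backprop."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - theta)))
    return beta * s * (1.0 - s)

# Membrane potentials just below and above the firing threshold.
v = np.array([0.2, 0.99, 1.01, 2.0])
spikes = spike_forward(v)   # hard 0/1 outputs: [0., 0., 1., 1.]
grads = surrogate_grad(v)   # smooth pseudo-derivative, largest near v = theta
```

The forward and backward functions disagree by construction, which is exactly the coupling of a nonsmooth forward model with a biased gradient estimator that the article describes.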
Nicholson's SAST method applies Sharpness Aware Minimization (SAM) to a surrogate forward SNN trained by backpropagation. This creates an ordinary smooth empirical risk as the optimization target, providing exact training gradients for the auxiliary model. The paper establishes theoretical guarantees including compact state stability, input Lipschitz bounds, and nonconvex convergence proofs for stochastic SAST with independent second minibatches.
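For readers unfamiliar with SAM, the core update can be sketched in a few lines of numpy. This is a generic SAM step on a toy smooth loss standing in for the surrogate empirical risk, under assumed defaults for the step size and the perturbation radius ρ; it is not the paper's implementation.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness Aware Minimization (SAM) step:
    1. Ascend to an adversarial point within an L2 ball of radius rho.
    2. Descend from the ORIGINAL weights using the gradient evaluated
       at that perturbed point, penalizing sharp minima."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed weights
    return w - lr * g_sharp

# Toy quadratic loss L(w) = ||w - 1||^2 with gradient 2*(w - 1),
# standing in for the smooth surrogate risk the article mentions.
grad_fn = lambda w: 2.0 * (w - 1.0)

w = np.zeros(3)
for _ in range(200):
    w = sam_step(w, grad_fn)
# w settles in a small neighborhood of the minimizer at 1
```

Because the surrogate forward model is smooth, `grad_fn` here is an exact gradient rather than a biased estimator, which is the property SAST exploits; the stochastic variant in the paper draws an independent second minibatch for the descent gradient.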
The empirical results are striking. On the neuromorphic N-MNIST dataset, SAST increased hard spike accuracy from 65.7% to 94.7% (best at ρ=0.30) while maintaining high surrogate forward accuracy. On the more challenging DVS Gesture dataset, accuracy jumped from 31.8% to 63.3% (best at ρ=0.40). The method also demonstrated improved corruption robustness and manageable training overhead. The research includes comprehensive controls for compute matching, calibration, and theory alignment required for practical assessment.
This work represents a significant advancement in making spiking neural networks more practical for real-world deployment, particularly for edge computing applications where energy efficiency is critical. By dramatically reducing the transfer gap, SAST brings neuromorphic computing closer to commercial viability.
- SAST boosts N-MNIST hard spike accuracy from 65.7% to 94.7% (a gain of 29 percentage points)
- Method reduces the DVS Gesture transfer gap with a 31.5-percentage-point accuracy gain
- Provides theoretical guarantees including convergence proofs and stability bounds
Why It Matters
Enables more accurate and reliable neuromorphic AI for energy-efficient edge devices and robotics.