Research & Papers

Sharpness-Aware Surrogate Training for On-Sensor Spiking Neural Networks

New training technique boosts on-sensor AI accuracy from 31.8% to 63.3% on the DVS Gesture event-camera benchmark.

Deep Dive

A new research paper by Maximilian Nicholson introduces Sharpness-Aware Surrogate Training (SAST), a breakthrough method for training Spiking Neural Networks (SNNs). SNNs are brain-inspired models ideal for ultra-low-power, on-sensor processing in devices like event cameras, but they suffer from a major flaw: models trained with smooth 'surrogate' gradients perform poorly when deployed with the hard, binary spikes that real hardware requires. SAST tackles this 'surrogate-to-hard transfer gap' by applying Sharpness-Aware Minimization (SAM) during training, which steers optimization toward flatter, more robust regions of the loss landscape. The intuition: swapping the smooth training activation for a hard threshold perturbs the network's behavior, and a solution sitting in a flat region tolerates that perturbation far better than one in a sharp valley, so the final model stays stable once the hard threshold is applied.
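
To make the mechanics concrete, here is a minimal PyTorch-style sketch of the two ingredients: a spike activation that is smooth during training but a hard binary threshold at deployment, and a single SAM step that perturbs the weights toward the locally sharpest direction before computing the update. Everything here (the sigmoid surrogate, the function names, the perturbation radius rho) is an illustrative assumption, not the paper's exact formulation.

    import torch

    def spike(v: torch.Tensor, threshold: float = 1.0,
              beta: float = 10.0, hard: bool = False) -> torch.Tensor:
        # Training: smooth sigmoid "soft spike" so gradients can flow.
        # Deployment: hard binary threshold, as required on real hardware.
        if hard:
            return (v >= threshold).float()
        return torch.sigmoid(beta * (v - threshold))

    def sam_step(model: torch.nn.Module, loss_fn, batch,
                 rho: float = 0.05) -> None:
        # One SAM step: climb to a nearby worst-case point in weight
        # space, take the gradient there, restore the original weights.
        # Assumes gradients were zeroed before the call.
        params = [p for p in model.parameters() if p.requires_grad]
        loss = loss_fn(model, batch)                   # loss at current weights
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12
        eps = [rho * g / norm for g in grads]          # sharpest ascent direction
        with torch.no_grad():
            for p, e in zip(params, eps):
                p.add_(e)                              # perturb weights
        loss_fn(model, batch).backward()               # gradient at perturbed point
        with torch.no_grad():
            for p, e in zip(params, eps):
                p.sub_(e)                              # restore; optimizer uses p.grad

An ordinary optimizer then consumes p.grad as usual; the flat-minimum bias this introduces is what, per the reported results, survives the switch to hard spikes.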

On standard event-camera benchmarks, the results are dramatic. For the N-MNIST dataset, SAST boosted hard-spike accuracy from 65.7% to 94.7%. On the more complex DVS Gesture dataset, accuracy nearly doubled, jumping from 31.8% to 63.3%. Crucially, SAST remains effective under realistic hardware constraints, including INT8 and INT4 weight quantization and fixed-point arithmetic. On N-MNIST with INT8 quantization, accuracy soared from 47.6% to 96.9%. The method also significantly reduces computational cost (SynOps), with one test showing a drop from 86.2 million to 4.3 million operations on DVS Gesture.
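
A common way to simulate the INT8/INT4 constraint in such evaluations is symmetric 'fake quantization': weights are rounded onto an integer grid and mapped back to floats before the hard-spike network is run. A minimal sketch under that assumption (the paper's exact quantization scheme is not detailed here):

    import torch

    def fake_quantize(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
        # Simulate INT8/INT4 weight storage: round to a symmetric integer
        # grid, then dequantize back to float for evaluation.
        qmax = 2 ** (bits - 1) - 1                     # 127 for INT8, 7 for INT4
        scale = w.abs().max().clamp(min=1e-12) / qmax  # map max |w| to grid edge
        q = torch.round(w / scale).clamp(-qmax, qmax)  # integer weight values
        return q * scale                               # dequantized weights

Passing every weight tensor through this round-trip before inference approximates on-sensor integer arithmetic, the kind of setting in which the 47.6% to 96.9% N-MNIST jump was reported.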

This work positions SAST as a powerful component in the toolbox for building efficient, deployable neuromorphic systems. By directly addressing the core performance bottleneck in SNN deployment, it brings practical, high-accuracy spiking AI for real-time vision tasks on edge devices much closer to reality.

Key Points
  • SAST reduces the critical performance gap when SNNs move from training to deployment, boosting DVS Gesture accuracy from 31.8% to 63.3%.
  • The method maintains strong performance under hardware-aware simulations, improving N-MNIST accuracy from 47.6% to 96.9% even with INT8 quantization.
  • It also reduces computational cost (SynOps), cutting operations on DVS Gesture by 95% (from 86.2M to 4.3M) in one test; see the SynOps-counting sketch after this list.
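
SynOps count synaptic operations: each emitted spike triggers one operation per outgoing connection, so a layer's cost scales with its spike count times its fan-out. A minimal sketch of that accounting (the function name and the per-layer convention are assumptions, not taken from the paper):

    import torch

    def layer_synops(spikes: torch.Tensor, fan_out: int) -> int:
        # SynOps for one layer over a run: every emitted spike triggers
        # one synaptic operation per outgoing connection.
        return int(spikes.sum().item()) * fan_out

Summing this over layers and timesteps yields network totals like the 86.2M and 4.3M figures above, which is why sparser spiking translates directly into lower energy on neuromorphic hardware.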

Why It Matters

Enables high-accuracy, ultra-low-power AI for real-time vision in autonomous drones, AR glasses, and IoT sensors by making spiking neural networks practical.