TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network Training
A new tensor decomposition technique cuts SNN training time by 17.7% and reduces parameter counts nearly 8x.
A research team led by Donghyun Lee has developed TT-SNN, a novel method applying Tensor Train Decomposition to Spiking Neural Networks. SNNs are promising for low-power AI because their sparse, event-driven activations mimic biological neurons, but their training is notoriously memory- and compute-intensive due to spatio-temporal dynamics. TT-SNN addresses this by decomposing the network's weight tensors into a trainable, compressed format, drastically shrinking the model's footprint and computational demands.
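The decomposition step can be sketched with the classic TT-SVD procedure: a weight matrix is reshaped into a higher-order tensor, then factored into a chain of small three-way "cores" by repeated truncated SVDs. The NumPy sketch below illustrates the general technique only, not the authors' implementation; the 8x8x8x8 tensor shape and the rank budget are arbitrary demo choices.

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """TT-SVD: factor a d-way tensor into a chain of 3-way cores of
    shape (r_{k-1}, n_k, r_k) via sequential truncated SVDs."""
    dims = tensor.shape
    cores, r_prev = [], 1
    rest = tensor.reshape(dims[0], -1)  # (r_0 * n_1, n_2 * ... * n_d)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(rest, full_matrices=False)
        r = min(max_rank, S.size)  # truncate to the TT-rank budget
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        rest = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(rest.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out[0, ..., 0]  # drop the boundary ranks (both equal 1)

# Demo: a 64x64 weight matrix viewed as an 8x8x8x8 tensor.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8, 8, 8))
exact = tt_reconstruct(tt_decompose(W, max_rank=64))  # no truncation: lossless
cores = tt_decompose(W, max_rank=4)                   # aggressive truncation
n_tt, n_full = sum(c.size for c in cores), W.size     # 320 vs 4096 parameters
```

With a rank budget of 4, the four cores hold 320 numbers instead of the original 4096, a 12.8x compression; the cores are trained directly in place of the full weights, which is where the memory and FLOP savings during training come from.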
Validated on both static (CIFAR10/100) and dynamic (N-Caltech101) vision datasets, TT-SNN delivers significant results. On N-Caltech101, it achieved a 7.98X reduction in parameters and a 9.25X reduction in FLOPs. More importantly for real-world deployment, it cut training time by 17.7% and training energy consumption by 28.3%, all with negligible accuracy degradation. The team also designed a parallel computation pipeline and a custom training accelerator to fully exploit the method's inherent parallelism, moving it beyond a theoretical compression technique into a practical systems solution.
This work represents the first application of tensor decomposition specifically tailored for SNNs. By tackling the core inefficiency of SNN training, TT-SNN lowers a major barrier to their adoption in resource-constrained environments like mobile devices, sensors, and neuromorphic chips, where their energy-efficient inference is most valuable.
- Achieves 7.98X parameter reduction and 9.25X FLOP reduction on N-Caltech101 dataset.
- Cuts training energy by 28.3% and training time by 17.7% with minimal accuracy loss.
- First-ever application of Tensor Train Decomposition to Spiking Neural Networks, together with a proposed custom training accelerator.
Why It Matters
Makes brain-inspired, ultra-low-power AI models feasible to train, unlocking their potential for edge and mobile devices.