A Multiplication-Free Spike-Time Learning Algorithm and its Efficient FPGA Implementation for On-Chip SNN Training
New spike-time learning algorithm eliminates floating-point math for on-chip training...
Researchers Maryam Mirsadeghi, Mojtaba Mirbagheri, and Saeed Reza Kheradpisheh have introduced a multiplication-free, spike-time-based learning algorithm for Spiking Neural Networks (SNNs), designed specifically for efficient Field-Programmable Gate Array (FPGA) implementation. The approach eliminates floating-point arithmetic and explicit gradient storage, replacing them with a fully event-driven digital training pipeline that sharply reduces computational complexity and power consumption.

The team implemented the architecture on a Xilinx Artix-7 FPGA, achieving high operating speeds and minimal resource usage while maintaining competitive accuracy. Software simulations validated the algorithm's scalability, reaching 96.5% accuracy on MNIST and 84.8% on Fashion-MNIST.

Submitted to arXiv on April 25, 2026, the paper addresses a key hardware challenge in supervised SNN training: these biologically inspired models promise low-power, event-driven intelligence, but training them on-chip has typically required arithmetic too costly for resource-constrained devices such as sensors and IoT nodes. By removing multipliers and other complex arithmetic, the spike-driven design shrinks the hardware footprint and offers a practical, scalable path to real-time, on-chip SNN learning at the edge. The work is categorized under Neural and Evolutionary Computing (cs.NE) and is available on arXiv with the identifier 2604.23218.
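The paper's exact update rule is not reproduced in this summary, but the general idea of a multiplication-free spike-time update can be sketched as follows: replace the usual `w += lr * error` multiply with a sign-and-shift rule driven by the gap between actual and target spike times, so hardware needs only comparators, adders, and bit shifts. All names here (`LR_SHIFT`, `update_weight`) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a multiplication-free spike-time weight update.
# The learning rate is a power of two, applied as a right shift instead of
# a multiply; the direction of the update comes from the sign of the
# spike-time error. This mirrors the kind of shift-and-add arithmetic that
# maps cheaply onto FPGA logic, but is NOT the paper's exact rule.

LR_SHIFT = 4  # learning rate of 2^-4, realized as a bit shift (assumed value)

def update_weight(w: int, t_actual: int, t_target: int) -> int:
    """Adjust an integer weight from integer spike times, with no multiplies."""
    error = t_actual - t_target       # output spike too late (+) or too early (-)
    step = abs(error) >> LR_SHIFT     # update magnitude via shift, not multiply
    if error > 0:                     # fired too late: strengthen the synapse
        return w + step
    if error < 0:                     # fired too early: weaken the synapse
        return w - step
    return w                          # on-time spike: no change
```

Because every quantity is an integer and the only operations are comparison, addition, and shifting, such a rule fits in plain FPGA fabric without DSP multiplier blocks.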
- Eliminates all floating-point arithmetic and gradient storage, enabling purely event-driven, digital training
- Achieves 96.5% accuracy on MNIST and 84.8% on Fashion-MNIST with minimal resource usage on Xilinx Artix-7 FPGA
- Delivers a scalable, low-power solution for real-time on-chip SNN learning in edge computing environments
Why It Matters
Enables low-power, real-time SNN training on edge devices without expensive math hardware.