ShiftLIF: Efficient Multi-Level Spiking Neurons with Power-of-Two Quantization
No costly multiplications needed: just bit-shifts for 10x energy savings
Spiking neural networks (SNNs) promise ultra-low-power edge sensing thanks to event-driven computation. But standard leaky integrate-and-fire (LIF) neurons only output binary spikes, severely limiting representational capacity. Past attempts to add multiple spike levels used uniform quantization, which either mismatched the natural membrane potential distribution or introduced expensive synaptic multiplications, defeating the energy advantage. Now researchers from NUS and partners introduce ShiftLIF, a neuron design that maps membrane potentials to a logarithmically spaced power-of-two spike set. Because membrane potentials concentrate in small-amplitude regimes, the log spacing provides finer resolution where it matters most, while large values are covered coarsely. Crucially, this eliminates all multiply-accumulate operations (MACs) at the synapse: since every spike amplitude is a power of two, each weighted contribution reduces to a simple bit-shift and accumulate (bit-AC) operation, which is far cheaper in both energy and silicon area.
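A minimal sketch of the power-of-two mapping described above, assuming a hypothetical four-level spike set {0.125, 0.25, 0.5, 1.0} and nearest-level rounding in the log2 domain; the paper's actual level set, threshold, and rounding rule may differ:

```python
import math

def quantize_pow2(v, num_levels=4, v_min=0.125):
    """Map a membrane potential v to the nearest power-of-two spike level.

    Illustrative assumption: the spike set is {v_min * 2**k} for
    k = 0 .. num_levels-1, i.e. {0.125, 0.25, 0.5, 1.0} by default.
    Because levels are log-spaced, small potentials get finer resolution.
    """
    if v < v_min / 2:
        return 0.0  # sub-threshold: no spike emitted
    # Round to the nearest level in the log2 domain (log spacing).
    k = round(math.log2(v / v_min))
    k = max(0, min(num_levels - 1, k))  # clamp to the available levels
    return v_min * (2 ** k)
```

Note that rounding in the log2 domain (rather than linearly) is what gives the dense small-amplitude region its finer resolution: potentials between 0.125 and 0.25 split at their geometric midpoint, not their arithmetic one.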
The team evaluated ShiftLIF across 10 diverse sensing datasets: wireless signal classification (e.g., RadioML), acoustic keyword spotting, motion gesture recognition, and standard vision tasks like CIFAR-10/100 and ImageNet via spiking ResNets. Results show ShiftLIF consistently matches or exceeds the accuracy of competing multi-level neurons (e.g., LIF with 2-bit uniform quantization or multi-threshold LIF) while consuming synaptic energy very close to the binary LIF baseline (often within 10–20% overhead). For example, on the DVS128 Gesture dataset, ShiftLIF achieves 96.3% accuracy vs. 95.8% for the best prior work, with energy per inference of only 1.8× that of binary LIF (other multi-level methods are 3–5× higher). This breakthrough makes high-performance SNN inference truly viable for power-constrained edge devices like wearables, IoT sensors, and embedded robotics, without sacrificing accuracy gains from richer spike representations.
- ShiftLIF maps membrane potentials to a logarithmically spaced power-of-two spike set, matching dense small-amplitude regions with finer resolution
- Replaces expensive synaptic multiplications with bit-shift and accumulate (bit-AC) operations, drastically cutting energy without loss of representational power
- Tested on 10 datasets across wireless, acoustic, motion, and visual sensing; matches or exceeds existing multi-level neurons while keeping energy near standard binary LIF levels
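The bit-AC operation in the bullets above can be sketched as follows, assuming integer synaptic weights and spikes encoded by their power-of-two exponent k (so a spike of amplitude 2^k turns the weight multiply into a shift); the function name and encoding here are illustrative, not from the paper:

```python
def bit_ac_accumulate(weights, spike_exponents):
    """Accumulate synaptic input using shifts instead of multiplies.

    Illustrative assumption: each firing synapse carries a spike of
    amplitude 2**k, so its contribution weight * 2**k is computed as a
    bit-shift. spike_exponents holds k for firing synapses (k may be
    negative for sub-unit amplitudes) and None for silent ones.
    """
    total = 0
    for w, k in zip(weights, spike_exponents):
        if k is None:
            continue  # no spike: no work, preserving event-driven sparsity
        # Left-shift for amplitudes >= 1, right-shift for fractional ones.
        total += (w << k) if k >= 0 else (w >> -k)
    return total
```

For example, with weights [4, 8, 2] and spike exponents [1, -1, None], the result is 4·2 + 8·½ = 12, computed entirely with shifts and adds. In fixed-point hardware the right shift simply truncates low bits, which is where the "bit-AC is far cheaper than MAC" energy argument comes from.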
Why It Matters
Makes high-accuracy spiking neural nets practical for power-constrained edge devices like wearables and IoT sensors