Research & Papers

General aspects of internal noise in spiking neural networks

New study reveals multiplicative membrane noise degrades SNN accuracy by silencing neurons.

Deep Dive

A team of researchers including I.D. Kolesnikov and N. Semenova has published a new study, 'General aspects of internal noise in spiking neural networks,' that systematically analyzes how different types of internal noise affect the performance of spiking neural networks (SNNs). The research tested additive and multiplicative noise introduced at three critical stages: the input current, the membrane potential of a leaky integrate-and-fire (LIF) neuron, and the output spike generation. The key finding is that multiplicative noise applied directly to the membrane potential is uniquely destructive, causing the most significant performance degradation by driving membrane potentials to large negative values and effectively 'silencing' neuronal activity.
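
To make the three injection points concrete, here is a minimal NumPy sketch of a noisy leaky integrate-and-fire update. The function name, parameter values, and the exact noise model (zero-mean Gaussian, scaled multiplicatively as `x * (1 + eta)`) are illustrative assumptions, not the paper's equations.

```python
import numpy as np

def lif_step(v, i_in, rng, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0,
             v_reset=0.0, sigma=0.1, stage="membrane", kind="multiplicative"):
    """One Euler step of a LIF layer with Gaussian noise injected at one of
    three stages: "input", "membrane", or "output" (spike generation).
    `kind` selects additive or multiplicative noise. All parameter values
    are illustrative, not taken from the paper."""
    v = np.asarray(v, dtype=float)
    eta = rng.normal(0.0, sigma, size=v.shape)

    # Stage 1: noise on the input current.
    if stage == "input":
        i_in = i_in + eta if kind == "additive" else i_in * (1.0 + eta)

    # Leaky integration of the membrane potential.
    v = v + (dt / tau) * (-(v - v_rest) + i_in)

    # Stage 2: noise on the membrane potential itself. Multiplicative noise
    # here rescales v by a random factor, so fluctuations grow with |v|
    # instead of staying bounded.
    if stage == "membrane":
        v = v + eta if kind == "additive" else v * (1.0 + eta)

    # Stage 3: threshold crossing, reset, and noise on the emitted spikes.
    spikes = (v >= v_th).astype(float)
    v = np.where(spikes > 0.0, v_reset, v)
    if stage == "output":
        spikes = spikes + eta if kind == "additive" else spikes * (1.0 + eta)

    return v, spikes

# Example: 100 neurons driven above threshold, multiplicative membrane noise.
rng = np.random.default_rng(0)
v = np.zeros(100)
for _ in range(200):
    v, spikes = lif_step(v, i_in=1.5, rng=rng)
```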

To combat this vulnerability, the researchers evaluated input pre-filtering strategies. A sigmoid-based filter, which shifts inputs to a strictly positive range, proved most effective. With this filter in place, the dominant source of error shifts to additive noise in the input current, while other noise configurations, including the problematic multiplicative membrane noise, cause accuracy to drop by no more than 1%, even under high noise intensity. The study also compared common noise (affecting all neurons similarly) with uncommon noise (independent per neuron) across populations, finding that SNNs are more robust to common noise.
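
The paper describes the filter only as sigmoid-based; the function name and gain parameter below are hypothetical, but they show the key property, that every filtered value is strictly positive:

```python
import numpy as np

def sigmoid_prefilter(x, gain=1.0):
    """Squash raw inputs into the strictly positive range (0, 1) before they
    reach the noisy LIF layer. The gain parameter is hypothetical; the paper
    specifies only that the filter is sigmoid-based."""
    return 1.0 / (1.0 + np.exp(-gain * x))
```

Because every filtered input is positive, membrane potentials are less likely to wander into the large negative regime where multiplicative noise silences the neuron.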

Overall, this work provides a crucial map of noise sensitivity for SNNs, which are promising for low-power, neuromorphic computing. By identifying the membrane potential as a critical attack point for noise and validating a practical, filter-based mitigation strategy, the research offers clear engineering guidance for building more reliable and robust brain-inspired hardware and algorithms.

Key Points
  • Multiplicative noise on a neuron's membrane potential is the most damaging, suppressing activity and degrading SNN accuracy.
  • A sigmoid-based input pre-filter mitigates this, limiting accuracy loss to under 1% for most noise types at high intensity.
  • SNNs show greater robustness to common noise (affecting all neurons similarly) than to uncommon, independent noise, as sketched below.
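
To illustrate the common-versus-uncommon distinction from the last point, a small sketch of how the two noise draws differ (the helper name is illustrative):

```python
import numpy as np

def layer_noise(n_neurons, sigma, rng, common=True):
    """Common noise: one realization shared by the whole layer.
    Uncommon noise: an independent realization per neuron."""
    if common:
        return np.full(n_neurons, rng.normal(0.0, sigma))
    return rng.normal(0.0, sigma, size=n_neurons)
```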

Why It Matters

Provides a blueprint for building more reliable, noise-resistant neuromorphic chips and spiking AI models for edge devices.