Adaptive Reorganization of Neural Pathways for Continual Learning with Spiking Neural Networks
New algorithm mimics the brain's sparse neural pathways, outperforming existing continual learning methods on ImageNet while using less energy.
A team of researchers led by Bing Han at the Chinese Academy of Sciences has introduced a brain-inspired continual learning algorithm called SOR-SNN (Self-Organizing Regulation Spiking Neural Network). Published on arXiv, the work tackles a critical weakness of deep neural networks: catastrophic forgetting when learning multiple tasks sequentially. Unlike typical continual learning methods, which either allocate fixed resources per task or grow the network without bound, SOR-SNN mimics how the human brain dynamically reorganizes sparse neural pathways. A meta-learning controller regulates which pathways are active for each new task, keeping overall energy consumption low. The authors report consistent gains over existing methods on benchmarks including CIFAR100 and ImageNet, handling both simple, child-like learning tasks and complex vision challenges. Notably, the model demonstrates backward transfer, applying knowledge from new tasks to improve performance on older ones, a rare capability in continual learning systems.
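To make the mechanism concrete, here is a minimal sketch of task-conditioned pathway gating in a spiking layer, written in plain PyTorch. The class names (SORController, PathwayLIFLayer), the top-k sparsification, and all hyperparameters are illustrative assumptions, not the authors' implementation; the paper's actual regulation network and training procedure differ.

```python
# Illustrative sketch only: SORController and PathwayLIFLayer are hypothetical
# names, not the authors' code. A learned controller maps a task ID to a
# sparse gate over "pathways" (output channels) of a shared spiking layer.
import torch
import torch.nn as nn

class SORController(nn.Module):
    """Maps a task embedding to a soft gate over candidate pathways."""
    def __init__(self, num_tasks: int, num_pathways: int, hidden: int = 64):
        super().__init__()
        self.task_embed = nn.Embedding(num_tasks, hidden)
        self.gate_head = nn.Linear(hidden, num_pathways)

    def forward(self, task_id: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.gate_head(self.task_embed(task_id)))

class PathwayLIFLayer(nn.Module):
    """Leaky integrate-and-fire layer whose input current is masked by the
    controller's gate, so each task drives only a sparse subset of the shared
    weights. (Training a real SNN would need surrogate gradients, since the
    hard spike below is non-differentiable.)"""
    def __init__(self, in_features: int, num_pathways: int,
                 tau: float = 0.9, threshold: float = 1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, num_pathways)
        self.tau, self.threshold = tau, threshold

    def forward(self, x_seq: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # x_seq: (time, batch, in_features); gate: (num_pathways,)
        mem = x_seq.new_zeros(x_seq.size(1), self.fc.out_features)
        spikes = []
        for x_t in x_seq:                              # unroll over time steps
            mem = self.tau * mem + self.fc(x_t) * gate  # gated input current
            spk = (mem >= self.threshold).float()       # emit spikes
            mem = mem - spk * self.threshold            # soft reset
            spikes.append(spk)
        return torch.stack(spikes)

controller = SORController(num_tasks=10, num_pathways=128)
layer = PathwayLIFLayer(in_features=784, num_pathways=128)
gate = controller(torch.tensor(0))                 # pathway gate for task 0
k = 32                                             # keep only top-k pathways
mask = torch.zeros_like(gate).scatter(0, gate.topk(k).indices, 1.0)
out = layer(torch.randn(5, 8, 784), gate * mask)
print(out.shape)                                   # torch.Size([5, 8, 128])
```

The point of routing each task through a shared, sparsely gated backbone rather than adding parameters per task is that total capacity, and hence energy, stays roughly fixed as tasks accumulate.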
Another standout feature is the algorithm's self-repairing ability. When parts of the network are pruned or damaged, SOR-SNN automatically allocates new pathways from the remaining structure to recover forgotten memories. This resilience is critical for real-world deployment, where hardware or network components may fail. The approach directly addresses the trade-off between performance and energy consumption that plagues most continual learning models: as the number of tasks grows, SOR-SNN's energy consumption stays low while accuracy remains high. By enabling a single spiking neural network to incrementally master hundreds of tasks without catastrophic forgetting, this work pushes toward truly lifelong learning agents that can operate efficiently on edge devices.
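The self-repair behavior can be sketched in the same spirit. The snippet below is a toy analogue, under the assumption that pathway selection is a top-k choice over a controller's gate as in the sketch above: when active pathways are pruned, selection simply re-runs over the surviving pool, after which the reallocated pathways would be retrained to recover the lost function.

```python
# Toy analogue of self-repair, not the paper's mechanism. The gate here is a
# random stand-in for a learned controller's output.
import torch

torch.manual_seed(0)
num_pathways, k = 128, 32
gate = torch.rand(num_pathways)        # stand-in for the controller's soft gate
active = gate.topk(k).indices          # pathways currently serving a task

# Simulate damage: half of the task's active pathways are pruned away.
alive = torch.ones(num_pathways, dtype=torch.bool)
alive[active[: k // 2]] = False

# "Self-repair": re-select k pathways from the surviving structure. In a real
# system the controller and the reallocated pathways would then be fine-tuned
# to recover the forgotten memories.
repaired = gate.masked_fill(~alive, float("-inf")).topk(k).indices
assert alive[repaired].all()           # no dead pathway was re-selected
reused = len(set(active.tolist()) & set(repaired.tolist()))
print(f"{reused} of {k} original pathways reused after repair")
```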
- SOR-SNN consistently outperforms existing continual learning methods in accuracy while consuming less energy on the CIFAR100 and ImageNet benchmarks.
- The model exhibits backward transfer, using new tasks to improve accuracy on old ones, and self-repairs when parts of the network are pruned.
- It reorganizes limited resources into sparse neural pathways via Self-Organizing Regulation networks, enabling a single SNN to learn up to hundreds of tasks.
Why It Matters
Enables energy-efficient, lifelong learning AI that self-repairs and adapts—critical for edge devices, robotics, and real-world deployment.