mlx-snn: Spiking Neural Networks on Apple Silicon via MLX
First native spiking neural network library for Apple's MLX framework cuts GPU memory use by up to 10x.
Researcher Jiahao Qin has introduced mlx-snn, the first spiking neural network (SNN) library built natively for Apple's MLX framework. It fills a gap in a rapidly growing field: the major SNN libraries, such as snnTorch, Norse, and SpikingJelly, target PyTorch or custom backends, leaving Apple Silicon users without a native, optimized option. The library provides a comprehensive SNN toolkit: six neuron models (including Leaky Integrate-and-Fire and Izhikevich), four surrogate gradient functions for training, and four spike encoding methods, one of them designed specifically for EEG data.
The library leverages MLX's core advantages (unified memory, lazy evaluation, and composable function transforms) to deliver significant performance gains on Apple hardware. In validation tests on MNIST digit classification, mlx-snn reached up to 97.28% accuracy while training 2.0 to 2.5 times faster and using 3 to 10 times less GPU memory than the popular snnTorch library on the same M3 Max chip. Released as open source under the MIT license and available on PyPI, mlx-snn offers an efficient foundation for conducting SNN research directly on Macs.
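To make the neuron models concrete, here is a minimal sketch of the discrete-time update behind a Leaky Integrate-and-Fire (LIF) neuron, the simplest of the models mentioned above. This is plain Python for illustration only; it is not the mlx-snn API, whose class and parameter names (`lif_step`, `beta`, `threshold` here are all invented) are not given in the article.

```python
def lif_step(v, input_current, beta=0.9, threshold=1.0):
    """One discrete time step of a LIF neuron (illustrative sketch).

    v             -- membrane potential carried over from the last step
    input_current -- weighted input arriving this step
    beta          -- leak factor: fraction of potential retained per step
    Returns (spike, new_potential).
    """
    v = beta * v + input_current            # leaky integration
    spike = 1.0 if v >= threshold else 0.0  # fire when threshold is crossed
    if spike:
        v -= threshold                      # soft reset by subtraction
    return spike, v


# Drive the neuron with a constant current and collect its spike train.
v = 0.0
spikes = []
for _ in range(10):
    s, v = lif_step(v, input_current=0.3)
    spikes.append(s)
```

With a constant sub-threshold input, the potential builds up over several steps and the neuron emits spikes periodically, which is the behavior an SNN layer unrolls over time during training.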
- First native SNN library for Apple's MLX framework, filling a gap left by PyTorch-focused tools like snnTorch.
- Achieves 2.0–2.5x faster training and 3–10x lower GPU memory use vs. snnTorch on an M3 Max, hitting 97.28% accuracy on MNIST.
- Provides a full research toolkit: six neuron models, four encoding methods (including EEG-specific), and a backpropagation-through-time pipeline.
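The surrogate-gradient idea behind that training pipeline can be sketched briefly: the spike is a hard threshold in the forward pass, whose true derivative is zero almost everywhere, so the backward pass substitutes a smooth surrogate. The fast-sigmoid surrogate below is one common choice in the SNN literature; it is illustrative only, and the specific surrogate functions and names mlx-snn ships may differ.

```python
def spike_forward(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside step on the potential."""
    return 1.0 if v >= threshold else 0.0


def spike_surrogate_grad(v, threshold=1.0, slope=25.0):
    """Backward pass: fast-sigmoid surrogate for the step's derivative,
    1 / (1 + slope * |v - threshold|)**2, which peaks at the threshold
    and decays away from it, giving usable gradients for BPTT.
    """
    return 1.0 / (1.0 + slope * abs(v - threshold)) ** 2


# The surrogate is largest where the neuron is closest to firing.
grads = [spike_surrogate_grad(v) for v in (0.5, 1.0, 1.5)]
```

During backpropagation through time, every spike in the unrolled network uses `spike_forward` for activations but `spike_surrogate_grad` in the chain rule, which is what makes gradient-based training of SNNs possible at all.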
Why It Matters
Enables efficient, native spiking neural network research on Apple Silicon, potentially accelerating development of low-power, brain-inspired AI.