SNAP-V: A RISC-V SoC with Configurable Neuromorphic Acceleration for Small-Scale Spiking Neural Networks
A new RISC-V chip uses neuromorphic acceleration to run spiking neural networks with extreme energy efficiency.
A research team from the University of Moratuwa, led by Kanishka Gunawardana, has unveiled SNAP-V, a novel RISC-V-based System on Chip (SoC) designed to solve a key bottleneck in edge AI. The chip is specifically engineered for Spiking Neural Networks (SNNs), which mimic the brain's sparse, event-driven communication for ultra-low-power computation. For small-scale SNN workloads, conventional SoCs suffer from memory bottlenecks while large neuromorphic chips are over-provisioned; SNAP-V addresses both problems by integrating a RISC-V core for management with two configurable, on-chip neuromorphic accelerators named Cerebra-S and Cerebra-H.
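To see why event-driven spiking computation can be so frugal, consider a minimal leaky integrate-and-fire (LIF) neuron sketch. This is a generic textbook model, not SNAP-V's actual neuron circuit, and every parameter (leak, threshold, weights) is hypothetical:

```python
# Minimal leaky integrate-and-fire (LIF) neuron -- a generic illustration
# of event-driven SNN computation, not SNAP-V's hardware model.
# All parameters (leak, v_thresh, weights) are hypothetical.

def lif_step(v, input_spikes, weights, leak=0.9, v_thresh=1.0):
    """Advance one timestep; return (new_potential, spike_emitted)."""
    # Synaptic work is done only for inputs that actually spiked:
    # silent inputs cost nothing, which is where SNN efficiency comes from.
    v = v * leak + sum(w for s, w in zip(input_spikes, weights) if s)
    if v >= v_thresh:
        return 0.0, True   # fire and reset the membrane potential
    return v, False

# Three presynaptic inputs; only the first and third spike this timestep.
v, fired = lif_step(v=0.5, input_spikes=[1, 0, 1], weights=[0.3, 0.8, 0.4])
# v = 0.5 * 0.9 + 0.3 + 0.4 = 1.15 >= 1.0, so the neuron fires.
```

In a dense artificial network, every synapse contributes a multiply-accumulate on every step; here, only active spikes trigger work, so sparse activity translates directly into fewer operations and less data movement.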
The Cerebra-S accelerator uses a bus-based architecture, while the more advanced Cerebra-H employs a Network-on-Chip (NoC) for better scalability. Both are built with parallel processing nodes and distributed memory to minimize data movement, the primary source of energy waste in AI chips. Fabricated in a 45nm CMOS process, the hardware demonstrated remarkable efficiency, consuming an average of just 1.05 picojoules (pJ) per synaptic operation. Furthermore, it maintained high accuracy, with hardware inference deviating from software simulation by only 2.62% on average across tested networks.
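The reported 1.05 pJ per synaptic operation (SOP) makes back-of-envelope energy budgeting straightforward. The sketch below uses that figure with a purely hypothetical network size, spike rate, and inference window, chosen only to illustrate the arithmetic:

```python
# Back-of-envelope energy estimate using the reported 1.05 pJ/SOP figure.
# The network size, spike rate, and timestep count are hypothetical.

PJ_PER_SOP = 1.05        # reported average for SNAP-V (45nm CMOS)

synapses = 100_000       # hypothetical small-scale SNN
spike_rate = 0.05        # hypothetical fraction of synapses active per timestep
timesteps = 100          # hypothetical inference window

sops = synapses * spike_rate * timesteps    # 500,000 synaptic operations
energy_uj = sops * PJ_PER_SOP / 1e6         # picojoules -> microjoules
print(f"{energy_uj:.3f} uJ per inference")  # prints "0.525 uJ per inference"
```

At that rate, a coin-cell battery holding on the order of a kilojoule could sustain an enormous number of such inferences, which is what makes always-on edge inference plausible.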
This combination of the open RISC-V instruction set and specialized neuromorphic architecture makes SNAP-V a compelling platform for the next generation of intelligent edge devices. It paves the way for always-on sensors, wearable health monitors, and other battery-constrained applications to run sophisticated, real-time AI locally without relying on power-hungry cloud connections.
- Integrates a RISC-V core with two neuromorphic accelerators (Cerebra-S and Cerebra-H) optimized for small-scale Spiking Neural Networks (SNNs).
- Achieves extreme energy efficiency of 1.05 picojoules (pJ) per synaptic operation in 45nm CMOS technology.
- Maintains high accuracy with only a 2.62% average deviation between software simulation and hardware inference.
Why It Matters
Enables real-time, sophisticated AI directly on low-power edge devices, reducing reliance on the cloud and extending battery life.