Research & Papers

Real-Time Frame- and Event-based Object Detection with Spiking Neural Networks on Edge Neuromorphic Hardware: Design, Deployment and Benchmark

Spiking neural networks on Intel's neuromorphic chip achieve 87–100% of ANN accuracy at a fraction of the power

Deep Dive

A research team led by Udayanga G.W.K.N. Gamage at Chalmers University of Technology, in collaboration with Intel Labs, has published a comprehensive methodology for designing and deploying spiking neural networks (SNNs) on the Intel Loihi 2 neuromorphic processor. Their system targets real-time object detection on energy-constrained platforms such as UAVs and mobile robots, with applications in autonomous navigation and inspection. The team benchmarked SNN-based detection on both frame-based and event-based datasets, comparing it against conventional ANN-based detection on the NVIDIA Jetson Orin Nano (60W), the Jetson Nano B01 (10W), and the Apple M2 CPU (15W). Results show that Loihi 2 delivers the lowest per-inference dynamic energy and the lowest overall power consumption, though the Jetson Orin Nano achieves higher inference rates. The research is published on arXiv (2605.00146) and in Neurocomputing (DOI: 10.1016/j.neucom.2026.133820).

A key innovation in the paper is distillation-aware training, which transfers knowledge from a pre-trained ANN teacher to the SNN student. This technique enables the SNN to recover 87–100% of the ANN's detection accuracy while maintaining lower inference latency and significantly lower energy consumption. Without distillation, SNNs exhibited an 11–27% accuracy drop. The findings highlight the potential of neuromorphic systems for ultra-low-power edge AI. For professionals in robotics and embedded vision, this means battery-powered devices can run real-time object detection without sacrificing accuracy, opening doors to longer flight times for drones, extended autonomy for mobile robots, and new applications in remote inspection.
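The paper does not reproduce its training objective here, but the teacher-student transfer it describes is typically built on a standard knowledge-distillation loss. The sketch below is a minimal, hedged illustration of that general recipe (not the authors' exact formulation): the student's ordinary task loss is blended with a KL-divergence term that pulls the student's temperature-softened outputs toward the ANN teacher's. The function names, the temperature `T=4.0`, and the blend weight `alpha=0.5` are illustrative assumptions.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, task_loss,
                      T=4.0, alpha=0.5):
    # Classic KD objective (Hinton-style), shown here as a generic
    # sketch rather than the paper's exact loss:
    #   L = alpha * L_task + (1 - alpha) * T^2 * KL(teacher || student)
    p = softmax(teacher_logits, T)   # teacher soft targets
    q = softmax(student_logits, T)   # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    # The T^2 factor keeps soft-target gradients comparable in
    # magnitude to the hard task-loss gradients as T grows.
    return alpha * task_loss + (1 - alpha) * (T ** 2) * kl
```

When student and teacher agree exactly, the KL term vanishes and only the scaled task loss remains; as their outputs diverge, the distillation term grows, which is the pressure that lets the SNN student recover most of the ANN teacher's accuracy.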

Key Points
  • Intel Loihi 2 achieves the lowest per-inference dynamic energy among all test platforms (Jetson Orin Nano, Jetson Nano B01, Apple M2 CPU).
  • Distillation-aware training recovers 87–100% of ANN detection accuracy in SNNs; without distillation, accuracy drops 11–27%.
  • SNNs on Loihi 2 run real-time object detection with lower power consumption overall, though Jetson Orin Nano has higher inference rates.
  • Target applications include UAV inspection, autonomous navigation, and mobile robotics on energy-constrained edge devices.

Why It Matters

Enables battery-powered drones and robots to run real-time object detection with near-ANN accuracy and drastically lower energy use.