Research & Papers

YANA: Bridging the Neuromorphic Simulation-to-Hardware Gap

Researchers release open-source FPGA accelerator that processes spiking neural networks at one event per cycle.

Deep Dive

A research team from Karlsruhe Institute of Technology has introduced YANA (Yet Another Neuromorphic Accelerator), an open-source FPGA-based digital accelerator designed to solve a critical bottleneck in neuromorphic computing. While Spiking Neural Networks (SNNs) promise significant power efficiency advantages for processing temporal data streams, their development has been hampered by the "simulation-to-hardware gap"—the limited availability of actual neuromorphic hardware for testing and deployment. YANA bridges this gap by providing a complete, accessible hardware and software framework that allows researchers to move algorithms from simulation to real silicon.

The YANA architecture implements a five-stage, event-driven processing pipeline that exploits the temporal and spatial sparsity inherent in SNNs. A key innovation is its input preprocessing scheme, which sustains a steady throughput of one event per cycle while eliminating buffer overflow risks. The system also uses hardware-efficient lookup tables for neuron leak calculations. Deployed on the consumer-accessible AMD Kria KR260 robotics platform, a single YANA core uses just 740 LUTs and 918 registers while supporting networks with up to 131,072 synapses and 1,024 neurons.
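To illustrate the lookup-table idea, here is a minimal sketch of how a table can replace the multiplier in a leaky integrate-and-fire (LIF) leak step, as is common in fixed-point FPGA designs. The 8-bit membrane range, leak factor, and threshold below are illustrative assumptions, not YANA's actual parameters.

```python
BETA = 0.9  # leak factor (assumed for illustration)

# Precompute the decayed value for every possible 8-bit membrane potential,
# so the leak step becomes a single table lookup instead of a multiplication.
LEAK_LUT = [int(round(v * BETA)) for v in range(256)]

def lif_step(v, input_current, threshold=128):
    """One event-driven LIF update using the LUT instead of a multiplier."""
    v = LEAK_LUT[v] + input_current   # leak via table lookup, then integrate
    if v >= threshold:                # fire and reset
        return 0, True
    return min(v, 255), False         # clamp to the 8-bit range

# Drive the neuron with a constant input until it spikes.
v, spikes = 0, []
for inp in [40] * 5:
    v, spiked = lif_step(v, inp)
    spikes.append(spiked)
```

In hardware, the table costs one block of memory per neuron model rather than a multiplier per core, which is one plausible reading of the "hardware-efficient" claim.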

The team has released the entire YANA framework as open-source, creating an end-to-end solution that integrates with existing neuromorphic tools through the Neuromorphic Intermediate Representation (NIR). This allows researchers to train SNNs using standard frameworks, optimize them for the YANA hardware, and deploy them on actual FPGA platforms. In experiments on the Spiking Heidelberg Digits dataset, YANA demonstrated near-linear scaling of inference time with both spatial and temporal sparsity levels, validating its efficient event-driven design.
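The near-linear scaling with sparsity follows from the event-driven model: work is proportional to the number of spike events, not to network size. Below is a hedged, software-level sketch of that principle; the function name and dense weight-row layout are illustrative assumptions, not YANA's implementation.

```python
def event_driven_step(events, weights, membrane):
    """Process one timestep's spike events.

    Each event triggers one weight-row accumulation, so the cycle count
    equals the number of events -- silent inputs cost nothing. This mirrors
    the one-event-per-cycle behaviour described for YANA.
    """
    cycles = 0
    for pre in events:                  # only the inputs that actually spiked
        for post, w in enumerate(weights[pre]):
            membrane[post] += w         # accumulate into target membranes
        cycles += 1                     # one event consumed per cycle
    return cycles
```

With this model, halving the event rate (i.e., doubling sparsity) halves the cycle count, which is consistent with the near-linear inference-time scaling reported on Spiking Heidelberg Digits.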

Key Points
  • Open-source FPGA accelerator processes SNNs at one event per cycle without buffer overflow
  • Supports networks with up to 131,072 synapses and 1,024 neurons on accessible AMD Kria platform
  • Provides complete end-to-end framework for training, optimizing, and deploying SNNs on real hardware

Why It Matters

Democratizes neuromorphic hardware development, enabling faster innovation in energy-efficient AI for edge devices and robotics.