Energy-Efficient Implementation of Spiking Recurrent Cells on FPGA
Pascal Harmeling, Florent De Geeter, Guillaume Drion
- Published: May 11, 2026 — 14:56 UTC
- Summary length: 418 words
- Relevance score: 80%
Problem
This preprint addresses the gap in energy-efficient implementations of Spiking Neural Networks (SNNs) on Field-Programmable Gate Arrays (FPGAs). While SNNs can potentially reduce energy consumption compared to traditional Artificial Neural Networks (ANNs), existing biologically faithful neuron models are often too resource-intensive for FPGA deployment. Conversely, simpler models such as Integrate-and-Fire (IF) or Leaky Integrate-and-Fire (LIF) compromise on the richness of neuronal dynamics. The authors propose Spiking Recurrent Cell (SRC) neurons as a solution that balances biological plausibility with hardware efficiency.
Method
The core technical contribution is the design of an FPGA accelerator for SNNs built from SRC neurons. The authors introduce mathematical simplifications that eliminate costly nonlinear functions (e.g., tanh, exp) and avoid floating-point arithmetic by employing integer scaling and piecewise-defined approximations. The entire network is implemented in VHDL, with weight matrices computed offline and stored directly in on-chip Look-Up Tables (LUTs), preserving robust SRC cell performance. The implementation is validated on spiking traces derived from the MNIST dataset, with experiments conducted on an Artix-7 XC7A200T FPGA clocked at 100 MHz.
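The summary does not reproduce the paper's actual approximations, but the general idea of replacing tanh with a piecewise-defined, fixed-point substitute can be sketched as follows. The Q8 scale factor and the hard saturation at ±1 below are illustrative assumptions, not the authors' choices:

```python
# Hypothetical sketch: a hard-tanh style piecewise approximation of tanh
# on fixed-point integers, in the spirit of avoiding exp/tanh and
# floating-point arithmetic on the FPGA. The Q8 format and saturation
# bounds are assumptions for illustration only.

SCALE = 1 << 8  # Q8 fixed-point: a real value x is stored as round(x * SCALE)

def hard_tanh_q(x_q: int) -> int:
    """Piecewise approximation of tanh on Q8 integers.

    Identity on (-1, 1), saturates to +/-1 outside; uses only
    comparisons, so it maps cheaply onto FPGA logic.
    """
    one = SCALE  # 1.0 in Q8
    if x_q > one:
        return one
    if x_q < -one:
        return -one
    return x_q
```

A real implementation would likely use more segments for a closer fit, but even this two-breakpoint version removes every multiply and transcendental call from the activation path.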
Results
The reference implementation achieves an accuracy of 96.31% using a 220-image spiking trace, with a processing time of 1.7424 ms per digit. The authors explore accuracy-energy trade-offs by varying the spiking trace length and quantizing synaptic weights. They report 93.32% accuracy at an energy consumption of 0.55 mJ per digit (using 55 images and 5-bit weights) and 92.89% accuracy at 0.45 mJ (using 44 images and 4-bit weights). These results indicate that SRC-based SNNs can maintain competitive performance while significantly reducing energy consumption compared to standard LIF/IF models.
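The paper's exact quantization scheme is not described in this summary; a common way to obtain 4- or 5-bit synaptic weights like those reported above is uniform symmetric quantization, sketched here (the function name and rounding choice are assumptions):

```python
# Hypothetical sketch: uniform symmetric quantization of float weights
# to signed n-bit integers, one plausible route to the 4-/5-bit weights
# reported in the results. Not necessarily the authors' method.

def quantize_weights(weights, n_bits):
    """Map floats to signed n-bit integers plus a dequantized copy.

    Scales by the largest magnitude so the full signed range
    [-q_max, q_max] is used, then rounds to the nearest level.
    """
    w_max = max(abs(w) for w in weights) or 1.0
    q_max = (1 << (n_bits - 1)) - 1        # 15 for 5 bits, 7 for 4 bits
    scale = w_max / q_max
    q = [round(w / scale) for w in weights]
    deq = [qi * scale for qi in q]          # what the hardware effectively computes
    return q, deq
```

Dropping from 5 to 4 bits halves the per-weight levels, which is consistent with the small accuracy drop (93.32% to 92.89%) the authors trade for lower energy.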
Limitations
The authors acknowledge that the simplifications made may not fully capture the complexity of biological neuron dynamics. Additionally, the reliance on a specific device (the Artix-7 FPGA) may limit the generalizability of the results to other platforms. The study does not explore how the proposed architecture scales beyond MNIST to larger datasets or more complex tasks.
Why it matters
This work has significant implications for the deployment of SNNs in energy-constrained environments, such as edge computing and IoT devices. By demonstrating that SRC-based SNNs can achieve competitive accuracy with reduced energy consumption, the research paves the way for more biologically inspired models in practical applications. The proposed methods for simplifying neuron dynamics and optimizing hardware implementation could influence future designs of energy-efficient neural architectures, potentially leading to broader adoption of SNNs in real-world applications.
Authors: Pascal Harmeling, Florent De Geeter, Guillaume Drion
Source: arXiv:2605.10679
URL: https://arxiv.org/abs/2605.10679v1