
Efficient Parallel Training Methods for Spiking Neural Networks with Constant Time Complexity

About

Spiking Neural Networks (SNNs) often suffer from high time complexity $O(T)$ due to the sequential processing of $T$ spikes, making training computationally expensive. In this paper, we propose a novel Fixed-point Parallel Training (FPT) method to accelerate SNN training without modifying the network architecture or introducing additional assumptions. FPT reduces the time complexity to $O(K)$, where $K$ is a small constant (usually $K=3$), by using a fixed-point iteration form of Leaky Integrate-and-Fire (LIF) neurons for all $T$ timesteps. We provide a theoretical convergence analysis of FPT and demonstrate that existing parallel spiking neurons can be viewed as special cases of our proposed method. Experimental results show that FPT effectively simulates the dynamics of original LIF neurons, significantly reducing computational time without sacrificing accuracy. This makes FPT a scalable and efficient solution for real-world applications, particularly for long-term tasks. Our code will be released at https://github.com/WanjinVon/FPT.
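The idea of replacing the sequential LIF recurrence with a small number of parallel fixed-point updates can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a soft-reset LIF variant (`u[t] = lam*u[t-1] + x[t] - theta*s[t-1]`) with hypothetical parameter names `lam` (leak), `theta` (threshold), and `K` iterations, and it evaluates all `T` membrane potentials at once with a dense decay matrix instead of the paper's exact formulation.

```python
import numpy as np

def fpt_lif(x, lam=0.5, theta=1.0, K=3):
    """Fixed-point parallel evaluation of a soft-reset LIF neuron (sketch).

    Sequential LIF:  u[t] = lam * u[t-1] + x[t] - theta * s[t-1]
                     s[t] = (u[t] >= theta)
    Instead of looping over T timesteps, start from an all-zero spike
    estimate and refine it with K parallel fixed-point updates.
    """
    T = len(x)
    # Lower-triangular decay matrix D[t, i] = lam**(t - i) for i <= t,
    # so u = D @ drive evaluates the linear recurrence for all t at once.
    idx = np.arange(T)
    D = np.tril(lam ** (idx[:, None] - idx[None, :]))
    s = np.zeros(T)                        # initial spike-train estimate
    for _ in range(K):                     # K parallel refinement steps
        s_prev = np.concatenate(([0.0], s[:-1]))  # reset uses previous step's spikes
        u = D @ (x - theta * s_prev)       # all T potentials in parallel
        s = (u >= theta).astype(float)     # threshold -> refined spike train
    return s, u
```

Because the spike at time $t$ only depends on spikes at earlier timesteps, each iteration fixes at least one more timestep exactly, so with $K = T$ the sketch reproduces the sequential simulation; the point of FPT is that in practice a small constant $K$ (e.g. 3) already approximates the original LIF dynamics well.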

Wanjin Feng, Xingyu Gao, Wenqian Du, Hailong Shi, Peilin Zhao, Pengcheng Wu, Chunyan Miao • 2025

Related benchmarks

Task                              Dataset              Result              Rank
Classification                    CIFAR10-DVS          Accuracy: 85.5%     133
Image Classification              ImageNet-100         Accuracy: 83.27%    84
Sequential Image Classification   Sequential CIFAR10   Accuracy: 88.45%    48
Sequential Image Classification   S-CIFAR100           Accuracy: 62.21%    7
