
STAER: Temporal Aligned Rehearsal for Continual Spiking Neural Network

About

Spiking Neural Networks (SNNs) are inherently suited for continual learning due to their event-driven temporal dynamics; however, their application to Class-Incremental Learning (CIL) has been hindered by catastrophic forgetting and the temporal misalignment of spike patterns. In this work, we introduce Spiking Temporal Alignment with Experience Replay (STAER), a novel framework that explicitly preserves temporal structure to bridge the performance gap between SNNs and ANNs. Our approach integrates a differentiable Soft-DTW alignment loss to maintain spike-timing fidelity and employs a temporal expansion and contraction mechanism on output logits to enforce robust representation learning. Implemented on a deep ResNet19 spiking backbone, STAER achieves state-of-the-art performance on Sequential-MNIST and Sequential-CIFAR10. Empirical results demonstrate that our method matches or outperforms strong ANN baselines (ER, DER++) while preserving biologically plausible dynamics. Ablation studies further confirm that explicit temporal alignment is critical for representational stability, positioning STAER as a scalable solution for spike-native lifelong learning. Code is available at https://github.com/matteogianferrari/staer.
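The Soft-DTW alignment loss mentioned in the abstract refers to the differentiable relaxation of Dynamic Time Warping (Cuturi and Blondel), in which the hard minimum in the DTW recursion is replaced by a smoothed soft-minimum controlled by a temperature gamma. A minimal NumPy sketch of that recursion is shown below; it is an illustrative reimplementation of the general Soft-DTW discrepancy, not the authors' code, and the 1-D squared-error cost and the variable names are assumptions for the example:

```python
import numpy as np

def soft_min(values, gamma):
    """Differentiable soft minimum: -gamma * logsumexp(-values / gamma)."""
    z = -np.asarray(values, dtype=float) / gamma
    zmax = z.max()
    return -gamma * (zmax + np.log(np.exp(z - zmax).sum()))

def soft_dtw(x, y, gamma=0.1):
    """Soft-DTW discrepancy between two 1-D sequences.

    Uses a squared-error local cost; smaller values mean the sequences
    can be warped onto each other more cheaply (better temporal alignment).
    """
    n, m = len(x), len(y)
    # R[i, j] holds the soft-minimal alignment cost of prefixes x[:i], y[:j].
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            # Soft-relaxed DTW recursion over match / insertion / deletion.
            R[i, j] = cost + soft_min(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma
            )
    return R[n, m]

# Hypothetical usage: compare a stored (rehearsal) spike-rate trace with
# the current network's trace; identical timing aligns more cheaply.
old_trace = np.array([0.0, 1.0, 0.0, 1.0])
shifted   = np.array([1.0, 0.0, 1.0, 0.0])
print(soft_dtw(old_trace, old_trace) < soft_dtw(old_trace, shifted))  # True
```

Because the soft minimum is smooth in its arguments, the whole discrepancy is differentiable with respect to the input sequences, which is what makes it usable as a training loss for aligning spike patterns across tasks.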

Matteo Gianferrari, Omayma Moussadek, Riccardo Salami, Cosimo Fiorini, Lorenzo Tartarini, Daniela Gandolfi, Simone Calderara • 2026

Related benchmarks

Task                                   Dataset               Result                         Rank
Continual Learning                     Sequential MNIST      Avg Acc 99.88                  149
Class-Incremental Continual Learning   CIFAR-10 Sequential   Forgetting 11.45               39
Task-Incremental Learning              CIFAR-10 Sequential   Final Average Accuracy 96.24   39
Class-Incremental Learning             CIFAR-10 Sequential   FAA 83.53                      39
Class-Incremental Learning             Sequential MNIST      Forgetting 0.44                33
