STAER: Temporal Aligned Rehearsal for Continual Spiking Neural Network
About
Spiking Neural Networks (SNNs) are inherently suited for continual learning due to their event-driven temporal dynamics; however, their application to Class-Incremental Learning (CIL) has been hindered by catastrophic forgetting and the temporal misalignment of spike patterns. In this work, we introduce Spiking Temporal Alignment with Experience Replay (STAER), a novel framework that explicitly preserves temporal structure to bridge the performance gap between SNNs and artificial neural networks (ANNs). Our approach integrates a differentiable Soft-DTW alignment loss to maintain spike-timing fidelity and employs a temporal expansion and contraction mechanism on output logits to enforce robust representation learning. Implemented on a deep ResNet19 spiking backbone, STAER achieves state-of-the-art performance on Sequential-MNIST and Sequential-CIFAR10. Empirical results demonstrate that our method matches or outperforms strong ANN baselines (ER, DER++) while preserving biologically plausible dynamics. Ablation studies further confirm that explicit temporal alignment is critical for representational stability, positioning STAER as a scalable solution for spike-native lifelong learning. Code is available at https://github.com/matteogianferrari/staer.
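To make the alignment-loss idea concrete, below is a minimal NumPy sketch of the Soft-DTW discrepancy (Cuturi & Blondel's soft-minimum relaxation of dynamic time warping) between two 1-D sequences. This is an illustrative toy, not the repository's implementation: the function name, squared-distance cost, and `gamma` smoothing parameter are assumptions for demonstration, and a real training loop would use an autodiff framework rather than NumPy.

```python
import numpy as np

def soft_dtw(x, y, gamma=1.0):
    """Soft-DTW discrepancy between two 1-D sequences.

    Smaller values indicate better temporal alignment. Replacing the hard
    minimum in the DTW recursion with a soft minimum makes the quantity
    differentiable, so it can serve as an alignment loss between stored
    and current spike-train representations.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n, m = len(x), len(y)
    # Pairwise squared-distance cost matrix between the two sequences.
    d = (x[:, None] - y[None, :]) ** 2
    # DP table; r[i, j] is the soft-aligned cost of the prefixes x[:i], y[:j].
    r = np.full((n + 1, m + 1), np.inf)
    r[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Soft minimum over the three DTW predecessors.
            vals = np.array([r[i - 1, j], r[i, j - 1], r[i - 1, j - 1]])
            softmin = -gamma * np.logaddexp.reduce(-vals / gamma)
            r[i, j] = d[i - 1, j - 1] + softmin
    return r[n, m]
```

For example, a spike-rate trace compared against itself scores lower than the same trace compared against a temporally shifted copy, which is the property the alignment loss exploits to penalize drifting spike timing.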
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Continual Learning | Sequential MNIST | Average Accuracy | 99.88 | 149 |
| Class-Incremental Continual Learning | CIFAR-10 Sequential | Forgetting | 11.45 | 39 |
| Task-Incremental Learning | CIFAR10 Sequential | Final Average Accuracy | 96.24 | 39 |
| Class-incremental learning | CIFAR-10 Sequential | Final Average Accuracy (FAA) | 83.53 | 39 |
| Class-incremental learning | Sequential MNIST | Forgetting | 0.44 | 33 |