
TEFormer: Structured Bidirectional Temporal Enhancement Modeling in Spiking Transformers

About

In recent years, Spiking Neural Networks (SNNs) have achieved remarkable progress, with Spiking Transformers emerging as a promising architecture for energy-efficient sequence modeling. However, existing Spiking Transformers still lack a principled mechanism for effective temporal fusion, limiting their ability to fully exploit spatiotemporal dependencies. Inspired by feedforward-feedback modulation in the human visual pathway, we propose TEFormer, the first Spiking Transformer framework that achieves bidirectional temporal fusion by decoupling temporal modeling across its core components. Specifically, TEFormer employs a lightweight, hyperparameter-free forward temporal fusion mechanism in the attention module, enabling fully parallel computation, while incorporating a backward gated recurrent structure in the MLP that aggregates temporal information in reverse order and reinforces temporal consistency. Extensive experiments demonstrate that TEFormer consistently and significantly outperforms strong SNN and Spiking Transformer baselines across diverse datasets. Moreover, through the first systematic evaluation of Spiking Transformers under different neural encoding schemes, we show that the performance gains of TEFormer remain stable across encoding choices, indicating that the improved temporal modeling translates into reliable accuracy improvements across varied spiking representations. These results collectively establish TEFormer as an effective and general framework for temporal modeling in Spiking Transformers.
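The two temporal pathways described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the forward path is modeled here as a parameter-free causal running mean over timesteps (computable in parallel via a cumulative sum), and the backward path as a sigmoid-gated recurrence that scans timesteps in reverse. The function names and the specific gate form are assumptions for illustration only.

```python
import numpy as np

def forward_temporal_fusion(x):
    # Hypothetical forward fusion: a hyperparameter-free causal running
    # mean over timesteps. Because it reduces to a cumulative sum, it
    # can be computed in parallel across the whole sequence.
    # x: (T, N, D) feature tensor over T timesteps.
    T = x.shape[0]
    steps = np.arange(1, T + 1).reshape(T, 1, 1)
    return np.cumsum(x, axis=0) / steps

def backward_gated_fusion(x, w_gate):
    # Hypothetical backward path: a gated recurrence scanned in reverse
    # order, so each timestep's output mixes in information aggregated
    # from all later timesteps. w_gate is an assumed (D, D) gate weight.
    T, N, D = x.shape
    h = np.zeros((N, D))
    out = np.empty_like(x)
    for t in range(T - 1, -1, -1):
        g = 1.0 / (1.0 + np.exp(-(x[t] @ w_gate)))  # gate in (0, 1)
        h = g * h + (1.0 - g) * x[t]                # blend state and input
        out[t] = h
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 2, 8))       # T=4 timesteps, batch 2, dim 8
w = rng.standard_normal((8, 8)) * 0.1
fwd = forward_temporal_fusion(x)
bwd = backward_gated_fusion(x, w)
print(fwd.shape, bwd.shape)              # both (4, 2, 8)
```

Note the asymmetry this sketch is meant to convey: the forward path has no sequential dependency and parallelizes trivially, while the backward path is an inherently sequential scan, which is why the paper confines it to the (cheaper) MLP branch.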

Sicheng Shen, Mingyang Lv, Bing Han, Dongcheng Zhao, Guobin Shen, Feifei Zhao, Yi Zeng • 2026

Related benchmarks

Task                              Dataset                    Metric           Result   Rank
Object Classification             N-CARS (test)              Accuracy         95.95    53
Image Classification              CIFAR10 standard (test)    Top-1 Accuracy   96.24    35
Sequential Image Classification   sMNIST                     Accuracy         96.2     18
Event-based Image Classification  DVS CIFAR10 (test)         Accuracy         81.9     17
Spoken Digit Recognition          SHD                        Accuracy         90.19    16
Image Classification              CIFAR100 standard (test)   Top-1 Accuracy   79.84    13
Action Recognition                HMDB51-DVS                 Accuracy         63.65    13
Action Recognition                UCF101-DVS                 Accuracy         63.16    13
Event-based Object Recognition    N-Caltech101 (test)        Top-1 Accuracy   78.5     6
Sequential Image Classification   sCIFAR                     Top-1 Accuracy   80.94    2
