
Spiking Vision Transformer with Saccadic Attention

About

The combination of Spiking Neural Networks (SNNs) and Vision Transformers (ViTs) holds potential for achieving both energy efficiency and high performance, making it particularly suitable for edge vision applications. However, a significant performance gap still exists between SNN-based ViTs and their ANN counterparts. Here, we first analyze why SNN-based ViTs suffer from limited performance and identify a mismatch between the vanilla self-attention mechanism and spatio-temporal spike trains. This mismatch results in degraded spatial relevance and limited temporal interactions. To address these issues, we draw inspiration from biological saccadic attention mechanisms and introduce an innovative Saccadic Spike Self-Attention (SSSA) method. Specifically, in the spatial domain, SSSA employs a novel spike distribution-based method to effectively assess the relevance between Query and Key pairs in SNN-based ViTs. Temporally, SSSA employs a saccadic interaction module that dynamically focuses on selected visual areas at each timestep and significantly enhances whole-scene understanding through temporal interactions. Building on the SSSA mechanism, we develop an SNN-based Vision Transformer (SNN-ViT). Extensive experiments across various visual tasks demonstrate that SNN-ViT achieves state-of-the-art performance with linear computational complexity. The effectiveness and efficiency of SNN-ViT highlight its potential for power-critical edge vision applications.
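The abstract does not give the exact SSSA formulation, but its two key ingredients — attention over binary spike trains and linear computational complexity — can be illustrated with a minimal sketch. The code below is an assumption-laden stand-in, not the authors' method: it uses the standard linear-attention reordering, computing the (D x D) product K^T V before multiplying by Q so cost grows linearly in the number of tokens N, and a simple running sum over timesteps as a crude proxy for the paper's saccadic temporal interaction module. All function and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike(x, thresh=0.5):
    # Heaviside step: turn real-valued activations into binary spikes
    return (x > thresh).astype(np.float32)

def linear_spike_attention(Q, K, V):
    """Linear-complexity attention over spike trains (illustrative only).

    Q, K, V: (T, N, D) binary spike tensors
             (T timesteps, N tokens, D feature dims).
    Computing K[t].T @ V[t] first costs O(T * N * D^2), linear in N,
    versus O(T * N^2 * D) for vanilla softmax attention.
    The running sum kv_acc loosely mimics temporal interaction:
    each timestep's readout also sees evidence from earlier steps.
    """
    T, N, D = Q.shape
    out = np.empty((T, N, D), dtype=np.float32)
    kv_acc = np.zeros((D, D), dtype=np.float32)
    for t in range(T):
        kv_acc += K[t].T @ V[t]   # (D, D) scene summary up to step t
        out[t] = Q[t] @ kv_acc    # (N, D) per-token readout
    return out

T, N, D = 4, 16, 8
Q = spike(rng.random((T, N, D)))
K = spike(rng.random((T, N, D)))
V = spike(rng.random((T, N, D)))
attn = linear_spike_attention(Q, K, V)
print(attn.shape)
```

Because matrix multiplication is associative, the first timestep's output equals the quadratic form `(Q[0] @ K[0].T) @ V[0]` exactly; the reordering changes cost, not the result at each accumulated step.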

Shuai Wang, Malu Zhang, Dehao Zhang, Ammar Belatreche, Yichen Xiao, Yu Liang, Yimeng Shan, Qian Sun, Enqi Zhang, Yang Yang• 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | CIFAR-10 | Accuracy | 96.1 | 508
Image Classification | CIFAR100 | Accuracy | 80.1 | 347
Image Classification | CIFAR10 | Top-1 Accuracy | 96.1 | 112
Image Classification | ImageNet-1K | Accuracy | 76.87 | 32
Neuromorphic Image Classification | DVS-CIFAR10 | Accuracy | 82.3 | 23
Image Classification | ImageNet-1K | Power Consumption (mJ) | 35.75 | 16
Image Classification | CIFAR10-DVS | Accuracy | 82.3 | 12
