
SpikingResformer: Bridging ResNet and Vision Transformer in Spiking Neural Networks

About

The remarkable success of Vision Transformers in Artificial Neural Networks (ANNs) has led to a growing interest in incorporating the self-attention mechanism and transformer-based architecture into Spiking Neural Networks (SNNs). While existing methods propose spiking self-attention mechanisms that are compatible with SNNs, they lack reasonable scaling methods, and the overall architectures proposed by these methods suffer from a bottleneck in effectively extracting local features. To address these challenges, we propose a novel spiking self-attention mechanism named Dual Spike Self-Attention (DSSA) with a reasonable scaling method. Based on DSSA, we propose a novel spiking Vision Transformer architecture called SpikingResformer, which combines the ResNet-based multi-stage architecture with our proposed DSSA to improve both performance and energy efficiency while reducing parameters. Experimental results show that SpikingResformer achieves higher accuracy with fewer parameters and lower energy consumption than other spiking Vision Transformer counterparts. Notably, our SpikingResformer-L achieves 79.40% top-1 accuracy on ImageNet with 4 time-steps, which is the state-of-the-art result in the SNN field.

Xinyu Shi, Zecheng Hao, Zhaofei Yu · 2024
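The abstract describes Dual Spike Self-Attention (DSSA), in which the attention computation operates on binary spike matrices rather than real-valued activations, with scaling factors applied before each firing step. The following is a minimal illustrative sketch of that idea, not the paper's actual implementation: the weight matrices, threshold, and scale values are hypothetical placeholders, and the firing function is a simple Heaviside threshold standing in for a proper spiking neuron model.

```python
import numpy as np

def spike_fire(x, v_th=1.0):
    # Heaviside firing: emit a binary spike wherever the input
    # crosses the threshold (placeholder for a spiking neuron).
    return (x >= v_th).astype(np.float32)

def dual_spike_self_attention(X, Wq, Wk, Wv, scale_a, scale_o):
    """Illustrative dual-spike attention step (hypothetical parameters).

    X is a binary spike matrix (tokens x channels). The attention map and
    output are produced by thresholding scaled products of spike matrices,
    so every matrix product has at least one binary operand -- the property
    spiking attention mechanisms exploit for energy efficiency.
    """
    Q = spike_fire(X @ Wq)                 # spiking queries
    K = spike_fire(X @ Wk)                 # spiking keys
    V = spike_fire(X @ Wv)                 # spiking values
    A = spike_fire((Q @ K.T) * scale_a)    # binary attention map
    out = spike_fire((A @ V) * scale_o)    # spiking output
    return out

rng = np.random.default_rng(0)
X = (rng.random((8, 16)) < 0.3).astype(np.float32)  # toy spike input
W = lambda: rng.normal(0.0, 0.5, (16, 16)).astype(np.float32)
out = dual_spike_self_attention(X, W(), W(), W(), 1 / 16, 1 / 8)
print(out.shape)  # (8, 16)
```

In an actual SNN this step would be repeated over the simulation time-steps (the paper reports results with 4), with the scaling chosen so that firing rates stay in a useful range as the model width grows.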

Related benchmarks

Task                  Dataset                    Metric           Result   Rank
Image Classification  CIFAR10 (test)             Accuracy         97.4     585
Classification        CIFAR10-DVS                Accuracy         84.8     145
Image Classification  CIFAR-100 standard (test)  Top-1 Accuracy   85.98    141
Image Classification  CIFAR-100                  Accuracy         78.21    117
Image Classification  CIFAR100                   Accuracy         79.28    102
Gesture Recognition   DVS-Gesture (test)         Accuracy         93.4     79
Image Classification  CIFAR10                    Accuracy (%)     96.24    46
Image Classification  ImageNet                   Accuracy         74.79    46
Image Classification  CIFAR10 standard (test)    Top-1 Accuracy   97.4     35
Image Classification  CIFAR10-DVS Binary-grid    Accuracy         81.3     27

(10 of 25 rows shown)
