
Spike-driven Transformer V2: Meta Spiking Neural Network Architecture Inspiring the Design of Next-generation Neuromorphic Chips

About

Neuromorphic computing, which runs Spiking Neural Networks (SNNs) on neuromorphic chips, is a promising energy-efficient alternative to traditional AI. CNN-based SNNs are the current mainstream of neuromorphic computing. By contrast, no neuromorphic chips are designed especially for Transformer-based SNNs, which have only just emerged, and whose performance is merely on par with CNN-based SNNs, offering no distinct advantage. In this work, we propose a general Transformer-based SNN architecture, termed "Meta-SpikeFormer", whose goals are: 1) Low power: it supports the spike-driven paradigm, in which the network performs only sparse additions; 2) Versatility: it handles various vision tasks; 3) High performance: it shows overwhelming performance advantages over CNN-based SNNs; 4) Meta-architecture: it provides inspiration for future Transformer-based neuromorphic chip designs. Specifically, we extend the Spike-driven Transformer of Yao et al. (2023) into a meta architecture, and explore the impact of structure, spike-driven self-attention, and skip connections on its performance. On ImageNet-1K, Meta-SpikeFormer achieves 80.0% top-1 accuracy (55M parameters), surpassing the current state-of-the-art (SOTA) SNN baselines (66M parameters) by 3.7%. This is the first directly trained SNN backbone that simultaneously supports classification, detection, and segmentation, obtaining SOTA results among SNNs. Finally, we discuss what the meta SNN architecture suggests for neuromorphic chip design. Source code and models are available at https://github.com/BICLab/Spike-Driven-Transformer-V2.
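The key idea behind the spike-driven paradigm is that when Query, Key, and Value are binary spike tensors, every "multiplication" in self-attention degenerates into masking, and the dot products reduce to sparse additions. The following NumPy sketch illustrates one simplified variant of this idea; the Heaviside spike function, the single-timestep formulation, and the function names are illustrative assumptions, not the paper's exact spiking-neuron dynamics:

```python
import numpy as np

def heaviside_spike(x, threshold=1.0):
    """Emit a binary spike (1) wherever the input crosses the threshold."""
    return (x >= threshold).astype(np.float32)

def spike_driven_self_attention(x, w_q, w_k, w_v):
    """Simplified single-timestep sketch of spike-driven self-attention.

    Q, K, V are binary spike tensors, so the Hadamard product is a
    logical AND (a mask) and the score computation is pure addition.
    """
    # Linear projections followed by spiking: outputs are {0, 1} tensors.
    q = heaviside_spike(x @ w_q)
    k = heaviside_spike(x @ w_k)
    v = heaviside_spike(x @ w_v)
    # Hadamard product of binary spikes == element-wise mask, no real multiply.
    attn = q * k
    # Column-wise summation over sparse spikes: addition only.
    scores = attn.sum(axis=0, keepdims=True)
    # Spike the scores, then use them to mask V -- again no multiplication.
    mask = heaviside_spike(scores)
    return v * mask
```

Because both `v` and `mask` are binary, the final masking step selects spikes rather than scaling them, which is what lets the whole operator be implemented with additions and gating on a neuromorphic chip.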

Man Yao, Jiakui Hu, Tianxiang Hu, Yifan Xu, Zhaokun Zhou, Yonghong Tian, Bo Xu, Guoqi Li • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Skeleton-based Action Recognition | NTU RGB+D (Cross-View) | Accuracy | 83.6 | 213
Skeleton-based Action Recognition | NTU RGB+D 120 (Cross-Subject) | Top-1 Accuracy | 64.3 | 143
Skeleton-based Action Recognition | NTU RGB+D 120 (Cross-Setup) | Accuracy | 65.9 | 136
Skeleton-based Action Recognition | NTU RGB+D (Cross-Subject) | Accuracy | 77.4 | 123
Cross-view geo-localization | University-1652 Drone -> Satellite | R@1 | 78.94 | 69
Drone-to-Satellite Retrieval | SUES-200 150m | R@1 | 76.62 | 54
Drone-to-Satellite Retrieval | SUES-200 250m | R@1 | 88.77 | 54
Skeleton-based Action Recognition | NW-UCLA | Accuracy | 89.4 | 44
Event-based action recognition | HARDVS | -- | -- | 22
Drone-view geo-localization | SUES-200 Drone -> Satellite, 200m altitude 1.0 (test) | R@1 | 91.25 | 20
(Showing 10 of 14 rows)
