Masked Spiking Transformer

About

The combination of Spiking Neural Networks (SNNs) and Transformers has attracted significant attention due to its potential for both high energy efficiency and high performance. However, existing works on this topic typically rely on direct training, which can lead to suboptimal performance. To address this issue, we propose to leverage the benefits of the ANN-to-SNN conversion method to combine SNNs and Transformers, resulting in significantly improved performance over existing state-of-the-art SNN models. Furthermore, inspired by the quantal synaptic failures observed in the nervous system, which reduce the number of spikes transmitted across synapses, we introduce a novel Masked Spiking Transformer (MST) framework that incorporates a Random Spike Masking (RSM) method to prune redundant spikes and reduce energy consumption without sacrificing performance. Experimental results demonstrate that the proposed MST model achieves a 26.8% reduction in power consumption at a masking ratio of 75% while maintaining the same level of performance as the unmasked model.
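The core RSM idea, dropping a random fraction of binary spikes before they cross a synapse, can be sketched in a few lines of PyTorch. This is a minimal illustration of the general technique: the function name, tensor layout, and dropout-style rescaling below are assumptions for clarity, not the authors' implementation.

```python
import torch

def random_spike_masking(spikes: torch.Tensor, mask_ratio: float,
                         rescale: bool = True) -> torch.Tensor:
    # Illustrative sketch (name and signature are assumptions, not the
    # paper's API). `spikes` is a binary 0/1 tensor of spike events;
    # `mask_ratio` is the fraction of spikes to drop, e.g. 0.75.
    if mask_ratio <= 0.0:
        return spikes
    # Each spike survives independently with probability 1 - mask_ratio.
    keep = torch.rand(spikes.shape, device=spikes.device) >= mask_ratio
    masked = spikes * keep.to(spikes.dtype)
    if rescale:
        # Dropout-style rescaling keeps the expected synaptic input
        # unchanged; whether MST compensates on the spikes, the weights,
        # or not at all is an assumption here.
        masked = masked / (1.0 - mask_ratio)
    return masked

# Example: a batch of binary spike maps with ~75% of spikes masked.
spikes = (torch.rand(2, 8, 8) > 0.5).float()
out = random_spike_masking(spikes, mask_ratio=0.75)
```

Like dropout, this keeps the expected input current to downstream neurons constant while transmitting far fewer spikes, which is where the reported power savings would come from.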

Ziqing Wang, Yuetong Fang, Jiahang Cao, Qiang Zhang, Zhongrui Wang, Renjing Xu · 2022

Related benchmarks

Task                           | Dataset | Result                 | Rank
Event-based action recognition | PAF     | Top-1 Accuracy: 90.14% | 7
