
MK-SGN: A Spiking Graph Convolutional Network with Multimodal Fusion and Knowledge Distillation for Skeleton-based Action Recognition

About

In recent years, multimodal Graph Convolutional Networks (GCNs) have achieved remarkable performance in skeleton-based action recognition. However, the reliance on high-energy-consuming continuous floating-point operations inherent in GCN-based methods poses significant challenges for deployment on energy-constrained, battery-powered edge devices. To address these limitations, MK-SGN, a Spiking Graph Convolutional Network with Multimodal Fusion and Knowledge Distillation, is proposed, leveraging the energy efficiency of Spiking Neural Networks (SNNs) for skeleton-based action recognition for the first time. By integrating the energy-saving properties of SNNs with the graph representation capabilities of GCNs, MK-SGN achieves significant reductions in energy consumption while maintaining competitive recognition accuracy. Firstly, we formulate a Spiking Multimodal Fusion (SMF) module to effectively fuse multimodal skeleton data represented as spike-form features. Secondly, we propose the Self-Attention Spiking Graph Convolution (SA-SGC) module and the Spiking Temporal Convolution (STC) module to capture the spatial relationships and temporal dynamics of spike-form features. Finally, we propose an integrated knowledge distillation strategy to transfer information from the multimodal GCN to the SGN, incorporating both intermediate-layer distillation and soft-label distillation to enhance the performance of the SGN. MK-SGN exhibits substantial advantages, surpassing state-of-the-art GCN frameworks in energy efficiency and outperforming state-of-the-art SNN frameworks in recognition accuracy. The proposed method achieves a remarkable reduction in energy consumption, exceeding 98% relative to conventional GCN-based approaches. This research establishes a robust baseline for developing high-performance, energy-efficient SNN-based models for skeleton-based action recognition.
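The distillation strategy described above combines a soft-label term (matching the teacher's temperature-softened class distribution) with an intermediate-layer term (matching feature maps). A minimal NumPy sketch of such a combined objective is shown below; the temperature `T`, the weight `alpha`, and the plain MSE feature term are illustrative assumptions for a generic teacher-student setup, not the paper's reported configuration.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, student_feat, teacher_feat,
            T=4.0, alpha=0.5):
    """Combined distillation objective:
    - soft-label term: KL(teacher || student) at temperature T,
      scaled by T^2 as is conventional for distillation;
    - intermediate-layer term: MSE between student and teacher features.
    `alpha` balances the two terms (an assumed hyperparameter)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1)
    soft = soft.mean() * T ** 2
    inter = np.mean((np.asarray(student_feat, dtype=float)
                     - np.asarray(teacher_feat, dtype=float)) ** 2)
    return alpha * soft + (1 - alpha) * inter
```

When the student's logits and features match the teacher's exactly, both terms vanish and the loss is zero; any mismatch in either the class distribution or the intermediate features increases it.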

Naichuan Zheng, Hailun Xia, Zeyu Liang, Yuchen Du • 2024

Related benchmarks

Task                               Dataset                         Metric           Result   Rank
Skeleton-based Action Recognition  NTU RGB+D (Cross-View)          Accuracy         85.6     213
Skeleton-based Action Recognition  NTU RGB+D 120 (Cross-Subject)   Top-1 Accuracy   67.8     143
Skeleton-based Action Recognition  NTU RGB+D 120 (Cross-Setup)     Accuracy         69.5     136
Skeleton-based Action Recognition  NTU RGB+D (Cross-Subject)       Accuracy         78.5     123
Skeleton-based Action Recognition  NW-UCLA                         Accuracy         92.3     44
