
Multi-Modality Co-Learning for Efficient Skeleton-based Action Recognition

About

Skeleton-based action recognition has garnered significant attention due to the use of concise and resilient skeletons. Nevertheless, the absence of detailed body information in skeletons restricts performance, while other multimodal methods require substantial inference resources and are inefficient when multimodal data are used during both training and inference. To address this and fully harness the complementary multimodal features, we propose a novel multi-modality co-learning (MMCL) framework that leverages multimodal large language models (LLMs) as auxiliary networks for efficient skeleton-based action recognition: it engages in multi-modality co-learning during the training stage and stays efficient by employing only concise skeletons at inference. Our MMCL framework primarily consists of two modules. First, the Feature Alignment Module (FAM) extracts rich RGB features from video frames and aligns them with global skeleton features via contrastive learning. Second, the Feature Refinement Module (FRM) uses RGB images with temporal information and text instructions to generate instructive features based on the powerful generalization ability of multimodal LLMs. These instructive text features further refine the classification scores, and the refined scores enhance the model's robustness and generalization in a manner similar to soft labels. Extensive experiments on the NTU RGB+D, NTU RGB+D 120, and Northwestern-UCLA benchmarks consistently verify the effectiveness of MMCL, which outperforms existing skeleton-based action recognition methods. Meanwhile, experiments on the UTD-MHAD and SYSU-Action datasets demonstrate the commendable generalization of MMCL in zero-shot and domain-adaptive action recognition. Our code is publicly available at: https://github.com/liujf69/MMCL-Action.
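The two training-time ideas in the abstract can be sketched in a few lines: an InfoNCE-style contrastive loss that pulls each skeleton feature toward its paired RGB feature (the alignment step in FAM), and a soft-label-style blend of the skeleton classifier's scores with scores derived from the LLM's instructive text features (the refinement step in FRM). This is a minimal NumPy illustration of those generic techniques, not the authors' implementation; the function names, the mixing weight `alpha`, and the temperature value are assumptions for the sketch.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize feature vectors to unit length."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def contrastive_alignment_loss(skel_feats, rgb_feats, temperature=0.07):
    """InfoNCE-style loss: the i-th skeleton and i-th RGB feature are a
    positive pair; all other pairs in the batch are negatives.
    (Hypothetical stand-in for MMCL's Feature Alignment Module.)"""
    s = l2_normalize(skel_feats)
    r = l2_normalize(rgb_feats)
    logits = s @ r.T / temperature              # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True) # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # positives sit on the diagonal

def refine_scores(cls_scores, instructive_scores, alpha=0.8):
    """Soft-label-style refinement: blend classifier scores with scores
    derived from instructive text features. `alpha` is a hypothetical
    mixing weight, not a value from the paper."""
    return alpha * cls_scores + (1.0 - alpha) * instructive_scores

# Toy usage with random features standing in for real encoders.
rng = np.random.default_rng(0)
B, D, C = 4, 16, 10                            # batch, feature dim, classes
skel = rng.normal(size=(B, D))
rgb = skel + 0.1 * rng.normal(size=(B, D))     # paired, slightly perturbed
loss = contrastive_alignment_loss(skel, rgb)
scores = refine_scores(rng.normal(size=(B, C)), rng.normal(size=(B, C)))
```

Because the toy RGB features are small perturbations of their paired skeleton features, the contrastive loss is much lower for the correct pairing than for a shuffled one, which is exactly the signal the alignment objective optimizes.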

Jinfu Liu, Chen Chen, Mengyuan Liu • 2024

Related benchmarks

Task                 Dataset                    Metric           Result   Rank
Action Recognition   NTU RGB+D 120 (X-set)      Accuracy         91.7     661
Action Recognition   NTU RGB+D 60 (X-sub)       Accuracy         93.5     467
Action Recognition   NTU RGB+D X-sub 120        Accuracy         90.3     377
Action Recognition   NTU RGB+D X-View 60        Accuracy         97.4     172
Action Recognition   NW-UCLA                    Top-1 Accuracy   97.5     67
Action Recognition   UTD-MHAD zero-shot         Top-1 Accuracy   54.97    5
Action Recognition   SYSU-Action zero-shot      Top-1 Accuracy   42.5     5

Other info

Code: https://github.com/liujf69/MMCL-Action