
Action Capsules: Human Skeleton Action Recognition

About

Owing to the compact and rich high-level representations it offers, skeleton-based human action recognition has recently become a highly active research topic. Previous studies have demonstrated that investigating joint relationships in the spatial and temporal dimensions provides information critical to action recognition. However, effectively encoding the global dependencies of joints during spatio-temporal feature extraction remains challenging. In this paper, we introduce the Action Capsule, which identifies action-related key joints by considering the latent correlation of joints in a skeleton sequence. We show that, during inference, our end-to-end network attends to a set of joints specific to each action, whose encoded spatio-temporal features are aggregated to recognize the action. Additionally, the use of multiple stages of action capsules enhances the network's ability to distinguish similar actions. Consequently, our network outperforms state-of-the-art approaches on the N-UCLA dataset and obtains competitive results on the NTU RGB+D dataset, while having significantly lower computational requirements as measured in GFLOPs.
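The core idea of attending to a set of action-related joints and aggregating their features can be illustrated with a minimal sketch. This is not the authors' Action Capsule implementation; it is a generic attention-weighted aggregation over per-joint features, with a hypothetical learned query vector standing in for the capsule's routing mechanism:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_joints(features, query):
    """Attention-weighted aggregation of per-joint features.

    features: (J, C) spatio-temporal feature per joint
    query:    (C,) hypothetical learned action query
    Returns a (C,) action descriptor and the (J,) attention weights.
    """
    scores = features @ query           # relevance score per joint
    weights = softmax(scores)           # attend to action-related joints
    return weights @ features, weights  # weighted sum of joint features

# toy example: 25 joints (as in the NTU skeleton) with 8-dim features
rng = np.random.default_rng(0)
feats = rng.standard_normal((25, 8))
q = rng.standard_normal(8)
descriptor, attn = aggregate_joints(feats, q)
```

In the paper's network, such per-action attention maps are produced per capsule stage and stacking multiple stages sharpens the separation between similar actions; here a single query suffices to show the aggregation step.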

Ali Farajzadeh Bavil, Hamed Damirchi, Hamid D. Taghirad • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Skeleton-based Action Recognition | NTU RGB+D (Cross-View) | Accuracy: 96.3 | 213 |
| Skeleton-based Action Recognition | NTU RGB+D (Cross-Subject) | Accuracy: 90.0 | 123 |
| Skeleton-based Action Recognition | Northwestern-UCLA (N-UCLA), third camera (test) | Top-1 Acc: 97.3 | 7 |
| Action Recognition | NTU RGB+D | GFLOPs: 3.48 | 4 |
| Action Recognition | Northwestern-UCLA (N-UCLA) | GFLOPs: 0.6 | 4 |
