
ASMa: Asymmetric Spatio-temporal Masking for Skeleton Action Representation Learning

About

Self-supervised learning (SSL) has shown remarkable success in skeleton-based action recognition by leveraging data augmentations to learn meaningful representations. However, existing SSL methods rely on augmentations that predominantly mask high-motion frames and high-degree joints (e.g., joints with degree 3 or 4). This results in biased and incomplete feature representations that struggle to generalize across varied motion patterns. To address this, we propose Asymmetric Spatio-temporal Masking (ASMa) for Skeleton Action Representation Learning, a novel combination of masking strategies that captures the full spectrum of spatio-temporal dynamics inherent in human actions. ASMa employs two complementary masking strategies: one that masks high-degree joints and low-motion frames, and another that masks low-degree joints and high-motion frames. Together, these strategies yield more balanced and comprehensive skeleton representations. Furthermore, we introduce a learnable feature alignment module to effectively align the representations learned from the two masked views. To facilitate deployment on resource-constrained, low-resource devices, we compress the learned and aligned representation into a lightweight model using knowledge distillation. Extensive experiments on the NTU RGB+D 60, NTU RGB+D 120, and PKU-MMD datasets demonstrate that our approach outperforms existing SSL methods, with an average improvement of 2.7-4.4% in fine-tuning and up to 5.9% in transfer learning to noisy datasets, while achieving competitive performance compared to fully supervised baselines. Our distilled model achieves a 91.4% parameter reduction and 3x faster inference on edge devices while maintaining competitive accuracy, enabling practical deployment in resource-constrained scenarios.
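The two complementary masking strategies described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the top-k selection of joints/frames, and the mask ratio are all illustrative assumptions. Joint "degree" comes from the skeleton adjacency graph, and per-frame "motion" is approximated by the summed frame-to-frame displacement.

```python
import numpy as np

def asymmetric_masks(seq, adjacency, mask_ratio=0.25):
    """Sketch of ASMa-style asymmetric spatio-temporal masking (assumed heuristics).

    seq:       (T, V, C) skeleton sequence (frames, joints, channels).
    adjacency: (V, V) binary skeleton graph.
    Returns two complementary masked views:
      view_a masks high-degree joints and low-motion frames,
      view_b masks low-degree joints and high-motion frames.
    """
    T, V, _ = seq.shape
    k_j = max(1, int(mask_ratio * V))  # number of joints to mask per view
    k_t = max(1, int(mask_ratio * T))  # number of frames to mask per view

    degree = adjacency.sum(axis=1)                           # joint degree in the graph
    motion = np.abs(np.diff(seq, axis=0)).sum(axis=(1, 2))   # displacement per transition
    motion = np.concatenate([[0.0], motion])                 # pad to T frames

    high_deg = np.argsort(degree)[-k_j:]  # highest-degree joints
    low_deg = np.argsort(degree)[:k_j]    # lowest-degree joints
    low_mot = np.argsort(motion)[:k_t]    # lowest-motion frames
    high_mot = np.argsort(motion)[-k_t:]  # highest-motion frames

    view_a, view_b = seq.copy(), seq.copy()
    view_a[:, high_deg] = 0.0   # view A: mask high-degree joints ...
    view_a[low_mot] = 0.0       # ... and low-motion frames
    view_b[:, low_deg] = 0.0    # view B: mask low-degree joints ...
    view_b[high_mot] = 0.0      # ... and high-motion frames
    return view_a, view_b
```

In an SSL pipeline, the two views would be encoded separately, passed through the learnable feature alignment module, and the aligned representations distilled into the lightweight student; those components are not sketched here.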

Aman Anand, Amir Eskandari, Elyas Rahsno, Farhana Zulkernine • 2026

Related benchmarks

Task                        | Dataset                     | Result         | Rank
Action Recognition          | NTU RGB+D 120 (X-set)       | Accuracy: 88.8 | 661
Action Recognition          | NTU RGB+D 60 (Cross-View)   | Accuracy: 96.8 | 575
Action Recognition          | NTU RGB+D 60 (X-sub)        | Accuracy: 92   | 467
Action Recognition          | NTU RGB+D 120 (X-sub)       | Accuracy: 87.9 | 377
Action Recognition          | PKU-MMD Part I              | Accuracy: 94.5 | 53
Action Recognition          | PKU-MMD (Part II)           | Accuracy: 76.8 | 52
Skeleton Action Recognition | PKU Part II (test)          | Accuracy: 77.2 | 21
