
Focal and Global Spatial-Temporal Transformer for Skeleton-based Action Recognition

About

Despite the great progress achieved by transformers in various vision tasks, they remain underexplored for skeleton-based action recognition, with only a few attempts so far. Moreover, these methods compute pair-wise global self-attention equally over all joints in both the spatial and temporal dimensions, undervaluing the effect of discriminative local joints and short-range temporal dynamics. In this work, we propose a novel Focal and Global Spatial-Temporal Transformer network (FG-STFormer), which is equipped with two key components: (1) FG-SFormer: a spatial transformer coupling focal joints and global parts. It forces the network to focus on modelling correlations for both the learned discriminative spatial joints and the human body parts. The selected focal joints eliminate the negative effect of non-informative joints when accumulating correlations. Meanwhile, interactions between the focal joints and body parts are incorporated to enhance the spatial dependencies via mutual cross-attention. (2) FG-TFormer: a focal and global temporal transformer. Dilated temporal convolution is integrated into the global self-attention mechanism to explicitly capture the local temporal motion patterns of joints or body parts, which we find to be vitally important for making the temporal transformer work. Extensive experimental results on three benchmarks, namely NTU-60, NTU-120 and NW-UCLA, show that our FG-STFormer surpasses all existing transformer-based methods and compares favourably with state-of-the-art GCN-based methods.
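The core idea behind FG-TFormer, combining global temporal self-attention with a dilated temporal convolution that captures short-range motion, can be illustrated with a minimal sketch. Note this is not the paper's implementation: the real model uses learned query/key/value projections, multi-head attention, and trained convolution weights, whereas this toy version uses unprojected features and uniform (averaging) convolution weights purely to show how the two branches combine.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_temporal_attention(X):
    """Global self-attention over time for one joint.
    X: (T, C) features of a joint across T frames.
    Simplified: queries = keys = values = X (no learned projections)."""
    scores = X @ X.T / np.sqrt(X.shape[1])   # (T, T) frame-to-frame affinities
    return softmax(scores, axis=-1) @ X      # (T, C) globally aggregated features

def dilated_temporal_conv(X, kernel=3, dilation=2):
    """Depth-wise dilated 1D convolution along time (uniform weights,
    zero padding) modelling short-range local motion."""
    T, C = X.shape
    pad = dilation * (kernel - 1) // 2
    Xp = np.pad(X, ((pad, pad), (0, 0)))
    out = np.zeros_like(X)
    for k in range(kernel):
        out += Xp[k * dilation : k * dilation + T]
    return out / kernel

def fg_tformer_block(X):
    # Hypothetical fusion: sum the global (attention) and local
    # (dilated conv) temporal branches; the paper integrates them
    # inside one block with learned parameters.
    return global_temporal_attention(X) + dilated_temporal_conv(X)
```

For example, feeding an (8 frames × 4 channels) feature sequence through `fg_tformer_block` returns a tensor of the same shape, with each frame's output mixing information from all frames (global branch) and its dilated neighbourhood (local branch).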

Zhimin Gao, Peitao Wang, Pei Lv, Xiaoheng Jiang, Qidong Liu, Pichao Wang, Mingliang Xu, Wanqing Li · 2022

Related benchmarks

Task                               | Dataset                      | Metric              | Result | Rank
Action Recognition                 | NTU RGB+D 120 (X-set)        | Accuracy            | 90.6   | 661
Action Recognition                 | NTU RGB+D 60 (Cross-View)    | Accuracy            | 96.7   | 575
Action Recognition                 | NTU RGB+D 60 (Cross-Subject) | Accuracy            | 92.6   | 305
Action Recognition                 | NTU RGB+D 120 Cross-Subject  | Accuracy            | 89     | 183
Skeleton-based Action Recognition  | NW-UCLA                      | --                  | --     | 44
Skeleton-based Action Recognition  | NTU RGB+D X-Sub60            | Top-1 Acc (E1)      | 92.6   | 16
Skeleton-based Action Recognition  | NTU RGB+D X-View60           | Top-1 Accuracy (E1) | 96.7   | 15
Skeleton-based Action Recognition  | NTU RGB+D X-Sub120           | Top-1 Acc (E1)      | 89     | 13
Skeleton-based Action Recognition  | NTU RGB+D X-Set120           | --                  | --     | 8
