
Spatial Temporal Transformer Network for Skeleton-based Action Recognition

About

Skeleton-based human action recognition has attracted great interest in recent years, as skeleton data has been demonstrated to be robust to illumination changes, body scales, dynamic camera views, and complex backgrounds. Nevertheless, an effective encoding of the latent information underlying the 3D skeleton is still an open problem. In this work, we propose a novel Spatial-Temporal Transformer network (ST-TR) which models dependencies between joints using the Transformer self-attention operator. In our ST-TR model, a Spatial Self-Attention module (SSA) is used to capture intra-frame interactions between different body parts, and a Temporal Self-Attention module (TSA) to model inter-frame correlations. The two are combined in a two-stream network which outperforms state-of-the-art models using the same input data on both NTU-RGB+D 60 and NTU-RGB+D 120.
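To illustrate the idea, here is a minimal NumPy sketch of how spatial and temporal self-attention differ in what they attend over. This is not the authors' implementation; the shapes, projection matrices, and single-head setup are illustrative assumptions. SSA applies self-attention across the joints within each frame, while TSA applies it across the frames of each joint.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over the first axis of x.
    x: (N, C) tokens; wq/wk/wv: (C, D) projection matrices (illustrative)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # softmax over the key axis
    a = np.exp(scores - scores.max(axis=-1, keepdims=True))
    a /= a.sum(axis=-1, keepdims=True)
    return a @ v

rng = np.random.default_rng(0)
T, V, C, D = 4, 25, 8, 8            # frames, joints (NTU skeletons have 25), channels, head dim
x = rng.normal(size=(T, V, C))      # a toy skeleton sequence
wq, wk, wv = (rng.normal(size=(C, D)) for _ in range(3))

# SSA: attention across the V joints within each frame (intra-frame)
ssa = np.stack([self_attention(x[t], wq, wk, wv) for t in range(T)])

# TSA: attention across the T frames for each joint (inter-frame)
tsa = np.stack([self_attention(x[:, v], wq, wk, wv) for v in range(V)], axis=1)

print(ssa.shape, tsa.shape)
```

In the full ST-TR model these two operators live in separate streams whose predictions are fused; the sketch only shows which axis each module attends over.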

Chiara Plizzari, Marco Cannici, Matteo Matteucci • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Recognition | NTU RGB+D 120 (X-set) | Accuracy | 84.7 | 717 |
| Action Recognition | NTU RGB+D (Cross-View) | Accuracy | 96.1 | 652 |
| Action Recognition | NTU RGB+D 60 (Cross-View) | Accuracy | 96.1 | 588 |
| Action Recognition | NTU RGB+D (Cross-Subject) | Accuracy | 89.9 | 500 |
| Action Recognition | NTU RGB+D 60 (X-sub) | Accuracy | 89.9 | 467 |
| Action Recognition | NTU RGB+D 120 (X-sub) | Accuracy | 82.7 | 430 |
| Action Recognition | NTU RGB+D 120 (Cross-Subject) | Accuracy | 89.4 | 222 |
| Action Recognition | NTU RGB+D 120 (Cross-View) | Accuracy | 95.7 | 61 |
