
Self-Attention Network for Skeleton-based Human Action Recognition

About

Skeleton-based action recognition has recently attracted considerable attention. Researchers continue to propose new approaches for extracting spatio-temporal relations, making substantial progress on large-scale skeleton-based datasets. Most proposed architectures are based on recurrent neural networks (RNNs), convolutional neural networks (CNNs), and graph-based CNNs. For skeleton-based action recognition, long-term contextual information is crucial, yet it is not captured by these architectures. To better represent and capture long-term spatio-temporal relationships, we propose three variants of the Self-Attention Network (SAN): SAN-V1, SAN-V2, and SAN-V3. Our SAN variants can extract high-level semantics by capturing long-range correlations. We have also integrated the Temporal Segment Network (TSN) with our SAN variants, which improves overall performance. Different configurations of the SAN variants and TSN are explored through extensive experiments. Our chosen configuration outperforms the state-of-the-art Top-1 and Top-5 accuracy by 4.4% and 7.9%, respectively, on Kinetics, and performs consistently better than state-of-the-art methods on NTU RGB+D.
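To illustrate the core mechanism the abstract describes, here is a minimal NumPy sketch of scaled dot-product self-attention over per-frame skeleton features, plus a simple TSN-style averaging consensus over segment predictions. This is a hedged illustration, not the authors' SAN-V1/V2/V3 architecture: the dimensions, weight initialization, and the `self_attention`/`tsn_consensus` function names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of frame features.

    X: (T, d) array, one d-dim skeleton feature per frame.
    The (T, T) attention matrix captures pairwise correlations between
    ALL frames, which is how self-attention models long-range temporal
    context that RNNs/CNNs struggle to reach.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (T, T) frame-to-frame scores
    return softmax(scores, axis=-1) @ V       # each frame attends to all frames

def tsn_consensus(segment_logits):
    """TSN-style segmental consensus: average predictions over segments.

    segment_logits: (num_segments, num_classes) array of per-segment scores.
    """
    return segment_logits.mean(axis=0)

rng = np.random.default_rng(0)
T, d = 30, 64                                  # hypothetical: 30 frames, 64-dim features
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))

out = self_attention(X, Wq, Wk, Wv)            # same shape as input: (30, 64)
consensus = tsn_consensus(rng.normal(size=(3, 10)))  # 3 segments, 10 classes
```

The output keeps the input's `(T, d)` shape, so attention layers can be stacked; in the TSN setting, each sampled segment is scored independently and the consensus function fuses them into one video-level prediction.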

Sangwoo Cho, Muhammad Hasan Maqbool, Fei Liu, Hassan Foroosh • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Action Recognition | NTU RGB+D (Cross-View) | Accuracy | 92.7 | 609 |
| Action Recognition | NTU RGB+D (Cross-Subject) | Accuracy | 87.2 | 474 |
| Skeleton-based Action Recognition | NTU 60 (X-sub) | Accuracy | 87.2 | 220 |
| Skeleton-based Action Recognition | NTU 60 (X-view) | Accuracy | 92.7 | 119 |
| Action Recognition | Kinetics | Top-1 Accuracy | 35.1 | 83 |
| Skeleton-based Action Recognition | Kinetics-skeleton v1 (val) | Top-1 Accuracy | 35.1 | 5 |
