
Shifted Chunk Transformer for Spatio-Temporal Representational Learning

About

Spatio-temporal representational learning has been widely adopted in various fields such as action recognition, video object segmentation, and action anticipation. Previous spatio-temporal representational learning approaches primarily employ ConvNets or sequential models, e.g., LSTMs, to learn intra-frame and inter-frame features. Recently, Transformer models have successfully dominated the study of natural language processing (NLP), image classification, etc. However, pure-Transformer based spatio-temporal learning can be prohibitively costly in memory and computation when extracting fine-grained features from tiny patches. To tackle the training difficulty and enhance spatio-temporal learning, we construct a shifted chunk Transformer with pure self-attention blocks. Leveraging recent efficient Transformer designs in NLP, this shifted chunk Transformer can learn hierarchical spatio-temporal features from a local tiny patch to a global video clip. Our shifted self-attention can also effectively model complicated inter-frame variances. Furthermore, we build a clip encoder based on the Transformer to model long-term temporal dependencies. We conduct thorough ablation studies to validate each component and hyper-parameter in our shifted chunk Transformer, and it outperforms previous state-of-the-art approaches on Kinetics-400, Kinetics-600, UCF101, and HMDB51.
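To make the overall hierarchy concrete, the sketch below illustrates the general pattern described above: self-attention restricted to small chunks of patch tokens, a shifted chunk partition so information can cross chunk boundaries, and a clip-level encoder attending over per-frame tokens. This is a minimal NumPy illustration; all chunk sizes, shapes, pooling choices, and function names are assumptions for exposition, not the authors' reference implementation.

```python
# Minimal sketch of chunk-wise self-attention with a shifted partition and a
# clip-level encoder. All shapes and hyper-parameters are illustrative only.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    """Plain single-head self-attention over a (n_tokens, dim) array."""
    d = tokens.shape[-1]
    q = k = v = tokens                       # untrained projections, for brevity
    scores = q @ k.T / np.sqrt(d)
    return softmax(scores) @ v

def chunked_attention(patches, chunk_size, shift=0):
    """Attend within fixed-size chunks of patch tokens.

    A non-zero `shift` rolls the token sequence before chunking, so a second
    layer with shifted chunks lets neighbouring chunks exchange information.
    """
    n, _ = patches.shape
    rolled = np.roll(patches, -shift, axis=0)
    out = np.empty_like(rolled)
    for start in range(0, n, chunk_size):
        blk = rolled[start:start + chunk_size]
        out[start:start + chunk_size] = self_attention(blk)
    return np.roll(out, shift, axis=0)       # undo the shift

# Toy video clip: 8 frames, 16 patch tokens per frame, 32-dim embeddings.
rng = np.random.default_rng(0)
clip = rng.normal(size=(8, 16, 32))

frame_tokens = []
for frame in clip:
    x = chunked_attention(frame, chunk_size=4, shift=0)    # local chunks
    x = chunked_attention(x, chunk_size=4, shift=2)        # shifted chunks
    frame_tokens.append(x.mean(axis=0))                    # pool to a frame token

# Clip-level encoder: ordinary self-attention over the per-frame tokens to
# model longer-term temporal dependencies across the whole clip.
clip_features = self_attention(np.stack(frame_tokens))
print(clip_features.shape)                                 # (8, 32)
```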

Xuefan Zha, Wentao Zhu, Tingxun Lv, Sen Yang, Ji Liu • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Video Classification | Kinetics-400 | Top-1 Accuracy | 83 | 131
Video Action Recognition | HMDB-51 (3 splits) | Accuracy | 84.6 | 116
Video Classification | Kinetics-600 | Top-1 Accuracy | 84.3 | 84
Video Classification | UCF101 (averaged over three splits) | Accuracy | 98.7 | 39
Video Classification | Moments in Time (test) | Top-1 Accuracy | 37.3 | 5
