
VidTr: Video Transformer Without Convolutions

About

We introduce Video Transformer (VidTr) with separable attention for video classification. Compared with commonly used 3D networks, VidTr aggregates spatio-temporal information via stacked attentions and provides better performance with higher efficiency. We first introduce the vanilla video transformer and show that the transformer module can perform spatio-temporal modeling from raw pixels, but with heavy memory usage. We then present VidTr, which reduces the memory cost by 3.3$\times$ while keeping the same performance. To further optimize the model, we propose standard-deviation-based topK pooling for attention ($pool_{topK\_std}$), which reduces computation by dropping non-informative features along the temporal dimension. VidTr achieves state-of-the-art performance on five commonly used datasets with lower computational requirements, showing both the efficiency and effectiveness of our design. Finally, error analysis and visualization show that VidTr is especially good at predicting actions that require long-term temporal reasoning.
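The $pool_{topK\_std}$ idea — rank temporal positions by the standard deviation of their attention weights and drop the low-variance (near-uniform, hence non-informative) ones — can be sketched roughly as follows. This is an illustrative NumPy sketch, not the paper's implementation; the function name, shapes, and the row-wise std criterion are assumptions for clarity.

```python
import numpy as np

def topk_std_pool(attn, k):
    """Illustrative std-based topK pooling over the temporal dimension.

    attn: (T, T) temporal attention matrix. Rows whose weights are close
    to uniform have low standard deviation and are treated as
    non-informative; only the k highest-std rows are kept.
    (Hypothetical sketch; not the paper's exact API.)
    """
    stds = attn.std(axis=1)                  # per-timestep spread of attention
    keep = np.sort(np.argsort(stds)[-k:])    # k highest-std timesteps, in order
    return attn[keep], keep
```

For example, a perfectly uniform attention row has std 0 and is dropped first, while a sharply peaked row survives the pooling.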

Yanyi Zhang, Xinyu Li, Chunhui Liu, Bing Shuai, Yi Zhu, Biagio Brattoli, Hao Chen, Ivan Marsic, Joseph Tighe • 2021

Related benchmarks

Task                         | Dataset                       | Metric         | Result | Rank
Action Recognition           | Kinetics-400                  | Top-1 Acc      | 80.5   | 413
Action Recognition           | Something-Something v2        | Top-1 Accuracy | 63     | 341
Action Recognition           | Something-Something v2 (test) | Top-1 Acc      | 60.2   | 333
Video Action Recognition     | Kinetics-400                  | Top-1 Acc      | 79.1   | 184
Video Action Classification  | Something-Something v2        | Top-1 Acc      | 63     | 139
Action Recognition           | Kinetics-400 full (val)       | Top-1 Acc      | 79.1   | 136
Action Recognition           | Kinetics 700                  | Top-1 Accuracy | 70.8   | 68
Action Classification        | Kinetics 400 (val)            | Top-1 Accuracy | 80.5   | 63
Video Recognition            | SS v2                         | Top-1 Acc      | 63     | 47
Video Classification         | Kinetics 700                  | Top-1 Accuracy | 70.8   | 46

Showing 10 of 17 rows

Other info

Code
