
UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer

About

Learning discriminative spatiotemporal representation is the key problem of video understanding. Recently, Vision Transformers (ViTs) have shown their power in learning long-term video dependency with self-attention. Unfortunately, they exhibit limitations in tackling local video redundancy, due to the blind global comparison among tokens. UniFormer has successfully alleviated this issue by unifying convolution and self-attention as a relation aggregator in the transformer format. However, this model requires a tiresome and complicated image-pretraining phase before being finetuned on videos, which blocks its wide usage in practice. In contrast, open-sourced ViTs are readily available and well-pretrained with rich image supervision. Based on these observations, we propose a generic paradigm to build a powerful family of video networks by arming pretrained ViTs with efficient UniFormer designs. We call this family UniFormerV2, since it inherits the concise style of the UniFormer block but contains brand-new local and global relation aggregators, which allow for a preferable accuracy-computation balance by seamlessly integrating the advantages of both ViTs and UniFormer. Without any bells and whistles, UniFormerV2 achieves state-of-the-art recognition performance on 8 popular video benchmarks, including scene-related Kinetics-400/600/700 and Moments in Time, temporal-related Something-Something V1/V2, and untrimmed ActivityNet and HACS. In particular, it is, to the best of our knowledge, the first model to achieve 90% top-1 accuracy on Kinetics-400. Code will be available at https://github.com/OpenGVLab/UniFormerV2.
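The abstract describes arming a pretrained ViT with two new components: a local relation aggregator (convolution-style aggregation over nearby tokens) and a global relation aggregator (attention over all spatiotemporal tokens). The following is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation: a depthwise temporal convolution as the local aggregator, and a single learnable query that cross-attends to all tokens as the global aggregator. All module names, shapes, and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LocalAggregator(nn.Module):
    """Sketch of a local relation aggregator: a depthwise temporal
    convolution over the (T, H, W) token grid, added residually
    before a (possibly frozen) pretrained ViT block."""

    def __init__(self, dim: int, kernel: int = 3):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # Depthwise conv along the temporal axis only.
        self.dwconv = nn.Conv3d(
            dim, dim, kernel_size=(kernel, 1, 1),
            padding=(kernel // 2, 0, 0), groups=dim,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, H, W, C) video token grid
        y = self.norm(x).permute(0, 4, 1, 2, 3)    # (B, C, T, H, W)
        y = self.dwconv(y).permute(0, 2, 3, 4, 1)  # back to (B, T, H, W, C)
        return x + y                               # residual connection


class GlobalAggregator(nn.Module):
    """Sketch of a global relation aggregator: one learnable query
    cross-attends to all spatiotemporal tokens to produce a
    video-level summary vector."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.query = nn.Parameter(torch.zeros(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, C); returns (B, C)
        q = self.query.expand(tokens.size(0), -1, -1)
        out, _ = self.attn(q, tokens, tokens)
        return out.squeeze(1)


# Toy usage: 2 clips, 8 frames, 14x14 patch grid, 64-dim tokens.
x = torch.randn(2, 8, 14, 14, 64)
local = LocalAggregator(64)
glob = GlobalAggregator(64)
y = local(x)                        # same shape as x
summary = glob(y.flatten(1, 3))     # (2, 64) clip-level representation
print(y.shape, summary.shape)
```

The design choice sketched here is the one the abstract hints at: the local path stays cheap (depthwise, temporal-only), while the global path gives every frame's tokens a long-range view, so the pretrained spatial ViT weights can be reused unchanged.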

Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Limin Wang, Yu Qiao • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Action Recognition | Something-Something V2 (val) | Top-1 Accuracy | 73 | 535 |
| Action Recognition | Kinetics-400 | Top-1 Accuracy | 90 | 413 |
| Action Recognition | Something-Something V2 | Top-1 Accuracy | 73 | 341 |
| Action Recognition | Something-Something V2 (test) | Top-1 Accuracy | 73 | 333 |
| Action Recognition | Something-Something V1 (test) | Top-1 Accuracy | 62.7 | 189 |
| Video Classification | Something-Something V2 (test) | Top-1 Accuracy | 0.731 | 169 |
| Video Action Recognition | Kinetics-400 (val) | Top-1 Accuracy | 90 | 151 |
| Action Recognition | UCF-101 | Top-1 Accuracy | 98.2 | 147 |
| Video Classification | Something-Something V1 (test) | Top-1 Accuracy | 62.9 | 115 |
| Action Recognition | Kinetics-400 1.0 (val) | Top-1 Accuracy | 90 | 110 |

Showing 10 of 24 rows.
