
VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking

About

Scale is the primary factor for building a powerful foundation model that generalizes well to a variety of downstream tasks. However, it is still challenging to train video foundation models with billions of parameters. This paper shows that the video masked autoencoder (VideoMAE) is a scalable and general self-supervised pre-trainer for building video foundation models. We scale VideoMAE in both model and data with a core design. Specifically, we present a dual masking strategy for efficient pre-training, with an encoder operating on one subset of video tokens and a decoder processing another subset. Although VideoMAE is already very efficient due to the high masking ratio in the encoder, masking the decoder further reduces the overall computational cost. This enables efficient pre-training of billion-parameter models on video. We also use a progressive training paradigm: an initial pre-training on a diverse, multi-sourced unlabeled dataset, followed by a post-pre-training on a mixed labeled dataset. With this design, we successfully train a video ViT model with a billion parameters, which achieves new state-of-the-art performance on Kinetics (90.0% on K400 and 89.9% on K600) and Something-Something (68.7% on V1 and 77.0% on V2). In addition, we extensively verify the pre-trained video ViT models on a variety of downstream tasks, demonstrating their effectiveness as general video representation learners. The code and models are available at https://github.com/OpenGVLab/VideoMAEv2.
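The dual masking idea above can be sketched in a few lines: the encoder sees only a small visible subset of tokens, and the decoder reconstructs only a fraction of the remaining masked tokens rather than all of them. The sketch below uses simple random sampling and illustrative ratios as assumptions; the paper's actual implementation uses tube masking for the encoder and a running-cell mask for the decoder.

```python
import numpy as np

def dual_masking(num_tokens, encoder_mask_ratio=0.9, decoder_mask_ratio=0.5, seed=0):
    """Illustrative sketch of dual masking (not the paper's exact code).

    Returns the token indices the encoder processes (the small visible
    subset) and the masked-token indices the decoder reconstructs
    (a subset of the masked tokens, which further cuts compute).
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_tokens)

    # Encoder masking: keep only (1 - encoder_mask_ratio) of all tokens visible.
    num_visible = int(num_tokens * (1 - encoder_mask_ratio))
    encoder_visible = perm[:num_visible]
    masked = perm[num_visible:]

    # Decoder masking: reconstruct only a fraction of the masked tokens.
    num_decode = int(len(masked) * (1 - decoder_mask_ratio))
    decoder_targets = masked[:num_decode]
    return encoder_visible, decoder_targets

# e.g. a 16-frame clip, tubelet size 2, 14x14 patches per frame -> 8*14*14 = 1568 tokens
vis, tgt = dual_masking(1568)
print(len(vis), len(tgt))  # 156 visible tokens, 706 decoder targets
```

With a 90% encoder mask and a 50% decoder mask, the encoder attends over roughly 10% of the tokens and the decoder over roughly half of the rest, which is what makes billion-parameter pre-training tractable.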

Limin Wang, Bingkun Huang, Zhiyu Zhao, Zhan Tong, Yinan He, Yi Wang, Yali Wang, Yu Qiao • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | ImageNet-1K | Top-1 Acc | 71.4 | 836 |
| Action Recognition | Something-Something v2 (val) | Top-1 Accuracy | 77 | 535 |
| Action Recognition | Kinetics-400 | Top-1 Acc | 88.6 | 413 |
| Action Recognition | UCF101 | Accuracy | 99.6 | 365 |
| Action Recognition | Something-Something v2 | Top-1 Accuracy | 77 | 341 |
| Action Recognition | Something-Something v2 (test) | Top-1 Acc | 77 | 333 |
| Temporal Action Detection | THUMOS-14 (test) | -- | -- | 330 |
| Temporal Action Localization | THUMOS-14 (test) | AP @ IoU=0.5 | 73 | 319 |
| Action Recognition | Kinetics-400 (test) | Top-1 Accuracy | 82.1 | 245 |
| Action Recognition | HMDB51 | Top-1 Acc | 34.1 | 225 |

Showing 10 of 58 rows.
