
MVP: Multimodality-guided Visual Pre-training

About

Recently, masked image modeling (MIM) has become a promising direction for visual pre-training. In the context of vision transformers, MIM learns effective visual representations by aligning the token-level features with a pre-defined space (e.g., BEIT used a d-VAE trained on a large image corpus as the tokenizer). In this paper, we go one step further by introducing guidance from other modalities and validating that such additional knowledge leads to impressive gains for visual pre-training. The proposed approach is named Multimodality-guided Visual Pre-training (MVP), in which we replace the tokenizer with the vision branch of CLIP, a vision-language model pre-trained on 400 million image-text pairs. We demonstrate the effectiveness of MVP by performing standard experiments, i.e., pre-training the ViT models on ImageNet and fine-tuning them on a series of downstream visual recognition tasks. In particular, with ViT-Base/16 pre-trained for 300 epochs, MVP reports a 52.4% mIoU on ADE20K, surpassing BEIT (the baseline and previous state-of-the-art) by an impressive margin of 6.8%.

Longhui Wei, Lingxi Xie, Wengang Zhou, Houqiang Li, Qi Tian • 2022
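At its core, MVP keeps the BEiT-style masked-prediction recipe but swaps the dVAE tokenizer targets for continuous features from CLIP's frozen vision branch. Below is a minimal sketch of that idea, not the authors' released code: the `TinyViT` encoder, `mvp_loss`, masking ratio, and cosine-alignment loss are all illustrative assumptions, and the frozen teacher here merely stands in for a real pre-trained CLIP image encoder.

```python
# Sketch of an MVP-style objective: predict a frozen teacher's token features
# at masked patch positions. All module names and hyperparameters are
# hypothetical; the real teacher would be CLIP's vision branch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyViT(nn.Module):
    """Stand-in patch encoder: patch embedding + transformer blocks -> token features."""
    def __init__(self, img_size=224, patch=16, dim=768, depth=2):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x, mask):  # mask: (B, N) bool, True = masked patch
        tok = self.patch_embed(x).flatten(2).transpose(1, 2)          # (B, N, D)
        tok = torch.where(mask[..., None], self.mask_token.to(tok.dtype), tok)
        return self.blocks(tok + self.pos)                            # (B, N, D)

def mvp_loss(student, teacher, images, mask_ratio=0.4):
    """Align student tokens at masked positions with the frozen teacher's features."""
    B, N = images.size(0), student.pos.size(1)
    mask = torch.rand(B, N, device=images.device) < mask_ratio  # random masking
    pred = student(images, mask)                                # masked student pass
    with torch.no_grad():
        target = teacher(images, torch.zeros_like(mask))        # unmasked teacher pass
    # Cosine alignment on masked tokens (one plausible choice of loss).
    pred = F.normalize(pred, dim=-1)
    target = F.normalize(target, dim=-1)
    return (1.0 - (pred * target).sum(-1))[mask].mean()

# Usage sketch: the "CLIP" teacher is faked with another frozen TinyViT.
student = TinyViT()
teacher = TinyViT().eval()
for p in teacher.parameters():
    p.requires_grad_(False)
loss = mvp_loss(student, teacher, torch.randn(2, 3, 224, 224))
loss.backward()
```

In practice the teacher would be a pre-trained CLIP ViT-B/16 loaded from a checkpoint, and the choice of target layer, masking strategy, and loss would follow the paper's settings rather than the assumptions above.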

Related benchmarks

Task                  | Dataset                    | Metric         | Result | Rank
----------------------|----------------------------|----------------|--------|-----
Image Classification  | ImageNet-1K 1.0 (val)      | Top-1 Accuracy | 75.4   | 1866
Semantic Segmentation | ADE20K                     | mIoU           | 52.4   | 936
Image Classification  | ImageNet 1K (train val)    | Top-1 Accuracy | 84.4   | 107
Image Classification  | ImageNet-1K (fine-tuning)  | Accuracy (FT)  | 84.4   | 26
