
ImageBERT: Cross-modal Pre-training with Large-scale Weak-supervised Image-Text Data

About

In this paper, we introduce a new vision-language pre-trained model -- ImageBERT -- for image-text joint embedding. Our model is a Transformer-based model, which takes different modalities as input and models the relationship between them. The model is pre-trained on four tasks simultaneously: Masked Language Modeling (MLM), Masked Object Classification (MOC), Masked Region Feature Regression (MRFR), and Image-Text Matching (ITM). To further enhance the pre-training quality, we have collected a Large-scale weAk-supervised Image-Text (LAIT) dataset from the Web. We first pre-train the model on this dataset, then conduct a second-stage pre-training on Conceptual Captions and SBU Captions. Our experiments show that the multi-stage pre-training strategy outperforms single-stage pre-training. We also fine-tune and evaluate our pre-trained ImageBERT model on image retrieval and text retrieval tasks, and achieve new state-of-the-art results on both MSCOCO and Flickr30k datasets.
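The abstract describes training a single Transformer with four objectives at once (MLM, MOC, MRFR, ITM). The sketch below is a minimal, illustrative combination of such losses in PyTorch; it is not the authors' code, and all module names, dimensions, and tensor shapes (`JointEmbeddingHead`, `hidden=768`, 36 image regions, etc.) are assumptions chosen only to show how the four task heads could share one encoder output.

```python
# Minimal sketch (not the paper's implementation) of four pre-training heads
# sharing one Transformer's output states. All names/shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbeddingHead(nn.Module):
    """Toy task heads on top of a shared image-text Transformer encoder."""
    def __init__(self, hidden=768, vocab=30522, num_obj_classes=1600, region_dim=2048):
        super().__init__()
        self.mlm = nn.Linear(hidden, vocab)            # Masked Language Modeling
        self.moc = nn.Linear(hidden, num_obj_classes)  # Masked Object Classification
        self.mrfr = nn.Linear(hidden, region_dim)      # Masked Region Feature Regression
        self.itm = nn.Linear(hidden, 2)                # Image-Text Matching (match / mismatch)

    def forward(self, text_states, region_states, cls_state):
        return (self.mlm(text_states),
                self.moc(region_states),
                self.mrfr(region_states),
                self.itm(cls_state))

def joint_pretraining_loss(heads, text_states, region_states, cls_state,
                           mlm_labels, moc_labels, region_targets, itm_labels):
    """Sum of the four task losses; in practice only masked positions contribute."""
    mlm_logits, moc_logits, region_pred, itm_logits = heads(text_states, region_states, cls_state)
    ce = nn.CrossEntropyLoss(ignore_index=-100)  # -100 marks positions that are not masked
    return (ce(mlm_logits.flatten(0, 1), mlm_labels.flatten())
            + ce(moc_logits.flatten(0, 1), moc_labels.flatten())
            + F.mse_loss(region_pred, region_targets)
            + ce(itm_logits, itm_labels))

# Example with dummy encoder outputs: batch of 2, 16 text tokens, 36 image regions.
heads = JointEmbeddingHead()
loss = joint_pretraining_loss(
    heads,
    text_states=torch.randn(2, 16, 768),
    region_states=torch.randn(2, 36, 768),
    cls_state=torch.randn(2, 768),
    mlm_labels=torch.randint(0, 30522, (2, 16)),   # unmasked positions would hold -100
    moc_labels=torch.randint(0, 1600, (2, 36)),
    region_targets=torch.randn(2, 36, 2048),
    itm_labels=torch.tensor([1, 0]),
)
```

The multi-stage strategy mentioned above would simply run this same loss first over the LAIT data and then, in a second stage, over Conceptual Captions and SBU Captions.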

Di Qi, Lin Su, Jia Song, Edward Cui, Taroon Bharti, Arun Sacheti • 2020

Related benchmarks

Task | Dataset | Result (R@1) | Rank
Text-to-Image Retrieval | Flickr30K | 73.1 | 460
Image-to-Text Retrieval | Flickr30K 1K (test) | 87 | 439
Image-to-Text Retrieval | Flickr30K | 87 | 379
Text-to-Image Retrieval | Flickr30K 1K (test) | 54.3 | 375
Image-to-Text Retrieval | MS-COCO 5K (test) | 44 | 299
Text-to-Image Retrieval | MS-COCO 5K (test) | 50.5 | 286
Text-to-Image Retrieval | MS-COCO 5K (test) | 32.3 | 223
Image Retrieval | MS-COCO 5K (test) | 50.5 | 217
Image Retrieval | Flickr30k (test) | 76.7 | 195
Image Retrieval | Flickr30K | 54.3 | 144
Showing 10 of 31 rows
