
Weakly Supervised Vision-and-Language Pre-training with Relative Representations

About

Weakly supervised vision-and-language pre-training (WVLP), which learns cross-modal representations with limited cross-modal supervision, has been shown to effectively reduce the data cost of pre-training while maintaining decent performance on downstream tasks. However, current WVLP methods use only local descriptions of images, i.e., object tags, as cross-modal anchors to construct weakly-aligned image-text pairs for pre-training. This affects the data quality and thus the effectiveness of pre-training. In this paper, we propose to directly take a small number of aligned image-text pairs as anchors, and represent each unaligned image and text by its similarities to these anchors, i.e., relative representations. We build a WVLP framework based on the relative representations, namely RELIT, which collects high-quality weakly-aligned image-text pairs from large-scale image-only and text-only data for pre-training through relative representation-based retrieval and generation. Experiments on four downstream tasks show that RELIT achieves new state-of-the-art results under the weakly supervised setting.
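The core idea of a relative representation — describing an unaligned image or text by its similarities to a small set of anchors — can be sketched as follows. This is a minimal numpy illustration with toy random embeddings, not the paper's actual implementation; all names and dimensions here are hypothetical.

```python
import numpy as np

def relative_representation(emb, anchor_embs):
    """Represent an embedding by its cosine similarities to a set of anchors."""
    emb = emb / np.linalg.norm(emb)
    anchors = anchor_embs / np.linalg.norm(anchor_embs, axis=1, keepdims=True)
    return anchors @ emb  # one similarity score per anchor

# Toy setup: 3 anchor image-text pairs, each side embedded in its own
# unimodal space (8-dim, randomly generated for illustration).
rng = np.random.default_rng(0)
anchor_img = rng.normal(size=(3, 8))  # image-side anchor embeddings
anchor_txt = rng.normal(size=(3, 8))  # text-side anchor embeddings

# An unaligned image and an unaligned text that both happen to be
# close to anchor pair 1 in their respective spaces.
image = anchor_img[1] + 0.05 * rng.normal(size=8)
text = anchor_txt[1] + 0.05 * rng.normal(size=8)

rel_img = relative_representation(image, anchor_img)
rel_txt = relative_representation(text, anchor_txt)

# Although the image and text embeddings live in different spaces and are
# not directly comparable, their relative representations share a
# coordinate system indexed by the anchors, so they can be matched.
assert rel_img.argmax() == rel_txt.argmax() == 1
```

In this toy setup, comparing `rel_img` and `rel_txt` (rather than the raw embeddings) is what allows weakly-aligned image-text pairs to be retrieved across modalities without a shared encoder.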

Chi Chen, Peng Li, Maosong Sun, Yang Liu • 2023

Related benchmarks

Task                               Dataset            Metric            Result  Rank
Visual Question Answering          VQA v2 (test-dev)  Overall Accuracy  73.6    664
Natural Language Visual Reasoning  NLVR2 (test-p)     Accuracy          76.4    327
Visual Entailment                  SNLI-VE (test)     Overall Accuracy  78.6    197
Image Retrieval                    Flickr30k (test)   R@1               70.2    195

Other info

Code
