
MLLMs-Augmented Visual-Language Representation Learning

About

Visual-language pre-training has achieved remarkable success in many multi-modal tasks, largely attributed to the availability of large-scale image-text datasets. In this work, we demonstrate that Multi-modal Large Language Models (MLLMs) can enhance visual-language representation learning by establishing richer image-text associations for image-text datasets. Our approach is simple: we use MLLMs to extend each image with multiple diverse captions. To prevent the bias introduced by MLLMs' hallucinations and monotonous language styles, we propose "text shearing" to maintain the quality and availability of the extended captions. In image-text retrieval, without introducing additional training cost, our method consistently obtains improvements of 5.6–35.0 and 16.8–46.1 points in Recall@1 under the fine-tuning and zero-shot settings, respectively. Notably, our zero-shot results are comparable to those of fine-tuning on the target datasets, which encourages further exploration of the versatile uses of MLLMs.
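The abstract describes the pipeline only at a high level. As a rough illustration, the sketch below shows one plausible reading of it, assuming "text shearing" amounts to truncating each MLLM-generated caption to the word budget of the original caption so that hallucination-prone tails are cut off. The names `mllm`, `text_shearing`, and `extend_captions` are hypothetical, and this is not the authors' released code.

```python
# Minimal sketch (illustrative, not the authors' implementation) of
# MLLM caption extension with "text shearing".
# Assumptions: `mllm` is any callable mapping (image, prompt, num_captions)
# to a list of caption strings; shearing = truncating each generated
# caption to the original caption's word count.

from typing import Callable, List


def text_shearing(generated: str, original: str) -> str:
    """Truncate a generated caption to the original caption's word count."""
    budget = len(original.split())
    return " ".join(generated.split()[:budget])


def extend_captions(
    image,
    original: str,
    mllm: Callable[..., List[str]],
    k: int = 4,
) -> List[str]:
    """Return the original caption plus k sheared MLLM-generated captions."""
    raw = mllm(image, prompt="Describe the image.", num_captions=k)
    return [original] + [text_shearing(c, original) for c in raw]


if __name__ == "__main__":
    # Stand-in MLLM that always returns the same verbose caption.
    fake_mllm = lambda image, prompt, num_captions: [
        "a brown dog running across a grassy field chasing a red ball"
    ] * num_captions
    print(extend_captions(None, "a dog chasing a ball", fake_mllm, k=2))
```

The extended captions would presumably be generated offline before pre-training, which is consistent with the abstract's claim that the method adds no additional training cost.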

Yanqing Liu, Kai Wang, Wenqi Shao, Ping Luo, Yu Qiao, Mike Zheng Shou, Kaipeng Zhang, Yang You · 2023

Related benchmarks

Task                  Dataset                   Result                       Rank
Image Classification  Food-101                  --                           542
Image Classification  DTD                       --                           542
Image Classification  CIFAR-10                  --                           507
Image Classification  SUN397                    --                           425
Image Classification  ImageNet                  Top-1 Accuracy: 25           366
Image Classification  Caltech-101               Top-1 Accuracy: 63.5         152
Image Classification  Flowers                   Top-1 Accuracy: 17.5         101
Image Classification  Aircraft                  Top-1 Accuracy: 1.5          57
Image Classification  Pets                      Top-1 Accuracy: 32.1         41
Image Classification  Zero-shot Eval. Suite*    Food-101 Top-1 Acc.: 60.9    29

* Zero-shot Evaluation Suite: Food-101, CIFAR-10, CIFAR-100, SUN397, Stanford Cars, FGVC Aircraft, DTD, Oxford-IIIT Pets, Caltech-101, Flowers102, ImageNet-1K; evaluated on the respective test splits.

Showing 10 of 14 rows.
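As context for the zero-shot rows above, Top-1 Accuracy for CLIP-style models is typically measured by embedding one text prompt per class and assigning each image to the class whose prompt embedding is most similar. The sketch below shows this standard recipe; it is an assumption about the evaluation setup, not the paper's own code.

```python
# Standard zero-shot Top-1 Accuracy for CLIP-style models (assumed setup).
# image_embs: (N, D) image embeddings; text_embs: (C, D) one prompt
# embedding per class; labels: (N,) integer class indices.

import numpy as np


def zero_shot_top1(image_embs: np.ndarray, text_embs: np.ndarray,
                   labels: np.ndarray) -> float:
    # L2-normalize so the dot product equals cosine similarity.
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    # Predict the class whose prompt is nearest to each image.
    preds = (image_embs @ text_embs.T).argmax(axis=1)
    return float((preds == labels).mean())
```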
