
MLLMs-Augmented Visual-Language Representation Learning

About

Visual-language pre-training has achieved remarkable success in many multi-modal tasks, largely attributed to the availability of large-scale image-text datasets. In this work, we demonstrate that Multi-modal Large Language Models (MLLMs) can enhance visual-language representation learning by establishing richer image-text associations for image-text datasets. Our approach is simple: we use MLLMs to generate multiple diverse extended captions for each image. To prevent the bias introduced by MLLMs' hallucinations and monotonous language styles, we propose "text shearing" to maintain the quality and availability of the extended captions. In image-text retrieval, without introducing additional training cost, our method consistently obtains improvements of 5.6–35.0 and 16.8–46.1 on Recall@1 under the fine-tuning and zero-shot settings, respectively. Notably, we obtain zero-shot results that are comparable to fine-tuning on target datasets, which encourages more exploration of the versatile use of MLLMs.
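The abstract does not spell out how "text shearing" works; a plausible reading is that each MLLM-extended caption is truncated to roughly the token length of the original caption, so the model has little room to append hallucinated detail in a uniform MLLM style. The sketch below illustrates that reading; the function name `text_shear` and the word-level tokenization are assumptions for illustration, not the authors' implementation.

```python
def text_shear(original_caption: str, extended_captions: list[str]) -> list[str]:
    """Truncate each MLLM-extended caption to the original caption's token count.

    This is a hypothetical sketch: tokens are approximated by whitespace
    splitting, whereas a real pipeline would use the model's tokenizer.
    """
    max_len = len(original_caption.split())
    return [" ".join(cap.split()[:max_len]) for cap in extended_captions]


original = "a dog runs on the beach"  # 6 tokens
extended = [
    "a brown dog is running happily along a sandy beach near the ocean waves",
    "a small dog sprints across wet sand under a clear blue sky",
]
sheared = text_shear(original, extended)
# every sheared caption now has at most 6 tokens
```

Capping length this way keeps the diversity that multiple captions provide while bounding how far any single caption can drift from the image content.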

Yanqing Liu, Kai Wang, Wenqi Shao, Ping Luo, Yu Qiao, Mike Zheng Shou, Kaipeng Zhang, Yang You • 2023

Related benchmarks

Task | Dataset | Result | Rank
---- | ------- | ------ | ----
Image Classification | CIFAR-10 | - | 507
Image Classification | Food-101 | - | 494
Image Classification | DTD | - | 487
Image Classification | SUN397 | - | 425
Image Classification | ImageNet | Top-1 Accuracy: 25 | 324
Image Classification | Caltech-101 | Top-1 Accuracy: 63.5 | 146
Image Classification | Flowers | Top-1 Acc: 17.5 | 80
Image Classification | Aircraft | Top-1 Acc: 1.5 | 43
Image Classification | Zero-shot Evaluation Suite (Food-101, CIFAR-10, CIFAR-100, SUN397, Stanford Cars, FGVC Aircraft, DTD, Oxford-IIIT Pets, Caltech-101, Flowers102, ImageNet-1K), various (test) | Food-101 Top-1 Acc: 60.9 | 29
Image Classification | Pets | Top-1 Accuracy: 32.1 | 29

Showing 10 of 14 rows.
