
What If We Recaption Billions of Web Images with LLaMA-3?

About

Web-crawled image-text pairs are inherently noisy. Prior studies demonstrate that semantically aligning and enriching the textual descriptions of these pairs can significantly enhance model training across various vision-language tasks, particularly text-to-image generation. However, large-scale investigations in this area remain predominantly closed-source. Our paper aims to bridge this gap with a community effort, leveraging the powerful and open-sourced LLaMA-3, a GPT-4-level LLM. Our recaptioning pipeline is simple: first, we fine-tune a LLaMA-3-8B-powered LLaVA-1.5, then employ it to recaption 1.3 billion images from the DataComp-1B dataset. Our empirical results confirm that this enhanced dataset, Recap-DataComp-1B, offers substantial benefits in training advanced vision-language models. For discriminative models like CLIP, we observe enhanced zero-shot performance in cross-modal retrieval tasks. For generative models like text-to-image Diffusion Transformers, the generated images show markedly better alignment with users' text instructions, especially on complex queries. Our project page is https://www.haqtu.me/Recap-Datacomp-1B/
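As a rough illustration of the recaptioning step, the sketch below generates a detailed caption for a single image with a LLaVA-style model through Hugging Face transformers. Note the assumptions: the checkpoint `llava-hf/llava-1.5-7b-hf` is the public LLaVA-1.5-7B release standing in for the paper's LLaMA-3-8B-powered recaptioner, and the prompt is illustrative; the actual pipeline runs this over 1.3B images in batches.

```python
# Minimal sketch of the recaptioning step, NOT the authors' exact setup:
# a public LLaVA-1.5 checkpoint is used as a stand-in for the paper's
# LLaMA-3-8B-powered LLaVA-1.5 recaptioner.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # placeholder checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Any web image; here a standard COCO validation image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# LLaVA-1.5 chat format: the <image> token marks where the visual
# features are spliced into the prompt.
prompt = "USER: <image>\nPlease describe this image in detail. ASSISTANT:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)

output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
caption = processor.decode(output[0], skip_special_tokens=True)
print(caption.split("ASSISTANT:")[-1].strip())  # the new, richer caption
```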

Xianhang Li, Haoqin Tu, Mude Hui, Zeyu Wang, Bingchen Zhao, Junfei Xiao, Sucheng Ren, Jieru Mei, Qing Liu, Huangjie Zheng, Yuyin Zhou, Cihang Xie • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image-to-Text Retrieval | Urban-1K | - | 34 |
| Text-to-Image Retrieval | Urban-1K | - | 34 |
| Attribute Understanding | VG-Attribute | Attribute Score: 66.8 | 6 |
| Multimodal Understanding and Reasoning | MMMU | Accuracy: 45.2 | 4 |
| Multimodal Understanding and Reasoning | MM-Vet | MM-Vet Score: 37.8 | 4 |

Other info

Code
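For readers who only want the recaptioned data, the sketch below streams a few records with the Hugging Face datasets library. The dataset ID `UCSC-VLAA/Recap-DataComp-1B` and the exact column layout are assumptions based on the project page, so verify them against the dataset card.

```python
# Hedged sketch: stream a few rows of the recaptioned dataset.
# The dataset ID and field names are assumptions; check the dataset
# card before relying on them.
from datasets import load_dataset

ds = load_dataset("UCSC-VLAA/Recap-DataComp-1B", split="train", streaming=True)

for i, row in enumerate(ds):
    if i >= 3:
        break
    # Each row should pair an image URL with its original and new captions.
    print(row)
```

Streaming avoids downloading the full 1.3B-row metadata before inspecting it.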
