
Finetuned Multimodal Language Models Are High-Quality Image-Text Data Filters

About

We propose a novel framework for filtering image-text data by leveraging fine-tuned Multimodal Language Models (MLMs). Our approach outperforms predominant filtering methods (e.g., CLIPScore) by integrating recent advances in MLMs. We design four distinct yet complementary metrics to holistically measure the quality of image-text data. A new pipeline is established to construct high-quality instruction data for fine-tuning MLMs as data filters. Compared with CLIPScore, our MLM filters produce more precise and comprehensive scores that directly improve the quality of the filtered data and boost the performance of pre-trained models. We achieve significant improvements over CLIPScore on popular foundation models (i.e., CLIP and BLIP-2) and various downstream tasks. Our MLM filter generalizes to different models and tasks, and can be used as a drop-in replacement for CLIPScore. An additional ablation study verifies our design choices for the MLM filter.
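To make the filtering setup concrete, here is a minimal Python sketch of score-based image-text filtering with CLIPScore-style scoring as the baseline. Since the paper describes the MLM filter as a drop-in replacement for CLIPScore, the `score_fn` argument marks the swap point. The checkpoint name, threshold value, and helper names (`clip_score`, `filter_pairs`) are illustrative assumptions, not details from the paper.

```python
# Sketch: score-based image-text filtering. CLIPScore-style cosine similarity
# is the baseline scorer; an MLM-based scorer would be passed in its place.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint; the paper does not prescribe this one.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_score(image: Image.Image, caption: str) -> float:
    """Cosine similarity between CLIP image and text embeddings (CLIPScore-style)."""
    inputs = processor(text=[caption], images=[image],
                       return_tensors="pt", padding=True)
    img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    return torch.nn.functional.cosine_similarity(img_emb, txt_emb).item()

def filter_pairs(pairs, score_fn=clip_score, threshold=0.28):
    """Keep image-text pairs whose quality score clears the threshold.

    `threshold=0.28` is an assumed example value. Replacing `score_fn`
    with a fine-tuned MLM scorer mirrors the paper's drop-in usage.
    """
    return [(img, cap) for img, cap in pairs if score_fn(img, cap) >= threshold]
```

In this framing, swapping the scorer is the only change needed to move from CLIPScore filtering to MLM filtering; the downstream pre-training pipeline consumes the filtered pairs unchanged.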

Weizhi Wang, Khalil Mrini, Linjie Yang, Sateesh Kumar, Yu Tian, Xifeng Yan, Heng Wang • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | ImageNet-1K | - | 524 |
| Image Classification | ImageNet and Distribution Shifts | - | 49 |
| Image Classification | VTAB | Overall Accuracy: 36 | 24 |
| Image-Text Retrieval | Retrieval | Avg Recall: 29 | 11 |
| Multi-modal Representation Learning | DataComp medium Evaluation Suite | Average Score: 34.5 | 9 |
