
The Devil is in the Details: A Deep Dive into the Rabbit Hole of Data Filtering

About

The quality of pre-training data plays a critical role in the performance of foundation models. Popular foundation models often design their own recipes for data filtering, which makes it hard to analyze and compare different data filtering approaches. DataComp is a new benchmark dedicated to evaluating different methods for data filtering. This paper describes what we learned and the solutions we developed while participating in the DataComp challenge. Our filtering strategy includes three stages: single-modality filtering, cross-modality filtering, and data distribution alignment. We integrate existing methods and propose new solutions, such as computing the CLIP score on horizontally flipped images to mitigate the interference of scene text, using vision and language models to retrieve training samples for target downstream tasks, and rebalancing the data distribution to improve the efficiency of allocating the computational budget. We slice and dice our design choices, provide in-depth analysis, and discuss open questions. Our approach outperforms the best method from the DataComp paper by over 4% on the average performance of 38 tasks and by over 2% on ImageNet.

Haichao Yu, Yu Tian, Sateesh Kumar, Linjie Yang, Heng Wang• 2023
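The flipped-image CLIP score mentioned in the abstract rests on a simple observation: if a caption matches an image mainly because the caption's words are rendered as scene text inside the image, horizontally flipping the image scrambles that text and the similarity drops. Below is a minimal sketch of the idea, assuming precomputed embedding functions; the toy linear encoder stands in for a real CLIP image encoder, which the paper's actual pipeline would use.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def flipped_clip_score(image, caption_emb, encode_image):
    """Score an image-caption pair using the horizontally flipped image.

    Flipping disrupts any text rendered in the image, so pairs that match
    only because the caption appears as scene text score lower.
    `image` is an (H, W, C) array; `encode_image` is any image-embedding
    function (a stand-in here; a real pipeline would use a CLIP model).
    """
    flipped = image[:, ::-1, :]  # reverse the width axis: horizontal flip
    return cosine(encode_image(flipped), caption_emb)

# Toy stand-in encoder: a fixed random projection of mean pixel values
# (assumption for illustration only, not the paper's actual encoder).
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3))
def toy_encoder(img):
    return W @ img.mean(axis=(0, 1))

img = rng.random((4, 6, 3))
cap_emb = rng.standard_normal(8)
score = flipped_clip_score(img, cap_emb, toy_encoder)
```

In the actual filtering stage, one would keep a sample only if this flipped score stays above a threshold, or combine it with the unflipped score to down-weight text-dominated pairs.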

Related benchmarks

Task                                 | Dataset                                     | Result               | Rank
Image Classification                 | ImageNet 1k (test)                          | Top-1 Accuracy: 32   | 798
Image Classification                 | ImageNet-1K                                 | --                   | 524
Image Classification                 | ImageNet and Distribution Shifts            | --                   | 49
Image Classification                 | VTAB                                        | Overall Accuracy: 35.9 | 24
Image Classification                 | DataComp 38 downstream tasks medium (test)  | Accuracy: 37.1       | 12
Image-Text Retrieval                 | Retrieval                                   | Avg Recall: 24.7     | 11
Multi-modal Representation Learning  | DataComp medium Evaluation Suite            | Average Score: 34.5  | 9
