
Active Data Curation Effectively Distills Large-Scale Multimodal Models

About

Knowledge distillation (KD) is the de facto standard for compressing large-scale models into smaller ones. Prior works have explored ever more complex KD strategies involving different objective functions, teacher ensembles, and weight inheritance. In this work we explore an alternative, yet simple, approach: active data curation as effective distillation for contrastive multimodal pretraining. Our simple online batch-selection method, ACID, outperforms strong KD baselines across various model, data, and compute configurations. Further, we find that such an active data curation strategy is in fact complementary to standard KD, and the two can be effectively combined to train highly performant, inference-efficient models. Our simple and scalable pretraining framework, ACED, achieves state-of-the-art results across 27 zero-shot classification and retrieval tasks with up to 11% fewer inference FLOPs. We further demonstrate that our ACED models yield strong vision encoders for training generative multimodal models in the LiT-Decoder setting, outperforming larger vision encoders on image-captioning and visual question-answering tasks.
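To make the online batch-selection idea concrete, below is a minimal PyTorch sketch of selecting a sub-batch from a larger candidate batch. The learnability-style score (learner loss minus a reference model's loss), the select_sub_batch helper, and all tensor names are illustrative assumptions; the abstract does not spell out ACID's actual selection criterion, which may differ.

import torch
import torch.nn.functional as F

def select_sub_batch(image_emb, text_emb, ref_image_emb, ref_text_emb, keep_frac=0.5):
    """Keep the highest-scoring fraction of a candidate super-batch.

    image_emb / text_emb:         L2-normalised embeddings from the learner.
    ref_image_emb / ref_text_emb: embeddings from a pretrained reference model.
    """
    def per_example_clip_loss(img, txt, temperature=0.07):
        # Symmetric image-text contrastive loss, kept per example (no reduction).
        logits = img @ txt.t() / temperature
        targets = torch.arange(img.size(0), device=img.device)
        loss_i = F.cross_entropy(logits, targets, reduction="none")
        loss_t = F.cross_entropy(logits.t(), targets, reduction="none")
        return 0.5 * (loss_i + loss_t)

    with torch.no_grad():
        learner_loss = per_example_clip_loss(image_emb, text_emb)
        ref_loss = per_example_clip_loss(ref_image_emb, ref_text_emb)
        # Assumed "learnability" score: hard for the learner, easy for the reference.
        score = learner_loss - ref_loss

    k = max(1, int(keep_frac * score.numel()))
    return score.topk(k).indices

# Usage: score a 512-example candidate batch and train on the top half.
if __name__ == "__main__":
    B, D = 512, 64
    img, txt = (F.normalize(torch.randn(B, D), dim=-1) for _ in range(2))
    ref_img, ref_txt = (F.normalize(torch.randn(B, D), dim=-1) for _ in range(2))
    idx = select_sub_batch(img, txt, ref_img, ref_txt, keep_frac=0.5)
    print(idx.shape)  # torch.Size([256])

In this sketch the selected indices would simply be used to gather the examples that contribute to the contrastive (and, in the combined ACED setting, distillation) loss for that training step.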

Vishaal Udandarao, Nikhil Parthasarathy, Muhammad Ferjad Naeem, Talfan Evans, Samuel Albanie, Federico Tombari, Yongqin Xian, Alessio Tonioni, Olivier J. Hénaff • 2024

Related benchmarks

Task                   Dataset                    Result                     Rank
Classification         ImageNet shift             Accuracy: 70.7             22
Classification         Object-Centric datasets    Accuracy: 82.3             21
Classification         Scene-Centric datasets     Accuracy: 64.6             21
Image-Text Retrieval   COCO                       Retrieval Score: 58.3      21
Zero-shot Evaluation   StableEval (27 evals)      Average Performance: 70.9  21

Other info

Code
