
Multimodal Dataset Distillation Made Simple by Prototype-Guided Data Synthesis

About

Recent advances in multimodal learning have achieved remarkable success across diverse vision-language tasks. However, such progress heavily relies on large-scale image-text datasets, making training costly and inefficient. Prior efforts in dataset filtering and pruning attempt to mitigate this issue, but still require relatively large subsets to maintain performance and fail under very small subsets. Dataset distillation offers a promising alternative, yet existing multimodal dataset distillation methods require full-dataset training and joint optimization of image pixels and text features, making them architecture-dependent and limiting cross-architecture generalization. To overcome this, we propose a learning-free dataset distillation framework that eliminates the need for large-scale training and optimization while enhancing generalization across architectures. Our method uses CLIP to extract aligned image-text embeddings, obtains prototypes, and employs an unCLIP decoder to synthesize images, enabling efficient and scalable multimodal dataset distillation. Extensive experiments demonstrate that our approach consistently outperforms optimization-based dataset distillation and subset selection methods, achieving state-of-the-art cross-architecture generalization.
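The pipeline sketched in the abstract (extract aligned CLIP embeddings, reduce them to prototypes, then decode prototypes into synthetic images) can be illustrated roughly as follows. This is a minimal sketch under assumptions: the paper does not specify its prototype computation here, so a simple k-means step stands in for it, the CLIP encoder is stubbed with random normalized embeddings, and the unCLIP decoding step is only indicated in a comment.

```python
import numpy as np

def kmeans_prototypes(embeddings, k, iters=50, seed=0):
    """Derive k prototype vectors via plain k-means — an illustrative
    stand-in for however the paper obtains prototypes from CLIP space."""
    rng = np.random.default_rng(seed)
    centers = embeddings[rng.choice(len(embeddings), k, replace=False)].copy()
    for _ in range(iters):
        # Assign each embedding to its nearest prototype (Euclidean distance).
        dists = np.linalg.norm(embeddings[:, None] - centers[None], axis=-1)
        assign = dists.argmin(axis=1)
        # Move each prototype to the mean of its assigned cluster.
        for j in range(k):
            mask = assign == j
            if mask.any():
                centers[j] = embeddings[mask].mean(axis=0)
    return centers

# Stand-in for L2-normalized CLIP image-text embeddings of one concept
# (512-dim, as in ViT-B/32 CLIP); a real run would use the CLIP encoders.
rng = np.random.default_rng(1)
emb = rng.normal(size=(200, 512))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

protos = kmeans_prototypes(emb, k=10)
print(protos.shape)  # (10, 512): ten distilled prototype embeddings
# In the actual method, each prototype embedding would condition an unCLIP
# decoder (an image generator conditioned on CLIP embeddings) to synthesize
# one distilled image, yielding the small image-text training set.
```

Because no network is trained and no pixels are optimized against a particular student model, this prototype-then-decode route is what makes the distillation learning-free and architecture-agnostic.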

Junhyeok Choi, Sangwoo Mo, Minwoo Chae • 2026

Related benchmarks

Task                      Dataset            Metric     Result  Rank
Text-to-Image Retrieval   Flickr30k (test)   Recall@1   14.4    423
Image-to-Text Retrieval   Flickr30k (test)   Recall@1   18.7    370
Image Retrieval           Flickr30k (test)   Recall@1    9.9    195
Image-to-Text Retrieval   MS-COCO (test)     Recall@1    7.4     99
Text Retrieval            Flickr30k (test)   Recall@1    9.6     89
Text-to-Image Retrieval   MS-COCO (test)     Recall@1    5.3     66
