On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm
About
Contemporary machine learning requires training large neural networks on massive datasets and thus faces the challenge of high computational demands. Dataset distillation, a recently emerging strategy, aims to compress real-world datasets for efficient training. However, this line of research currently struggles with large-scale and high-resolution datasets, hindering its practicality and feasibility. To this end, we re-examine existing dataset distillation methods and identify three properties required for large-scale real-world applications, namely, realism, diversity, and efficiency. As a remedy, we propose RDED, a novel computationally efficient yet effective dataset distillation paradigm that enables both the diversity and realism of the distilled data. Extensive empirical results over various neural architectures and datasets demonstrate the advantage of RDED: we can distill the full ImageNet-1K to a small dataset comprising 10 images per class within 7 minutes, achieving a notable 42% top-1 accuracy with ResNet-18 on a single RTX 4090 GPU (while the SOTA achieves only 21% and requires 6 hours).
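To make the "10 images per class" idea concrete, here is a minimal, hypothetical sketch of the general selection step behind realism-oriented distillation: keeping only the top-scoring real samples from each class. The scoring function is an assumption for illustration; in an RDED-like pipeline the scores would come from a pretrained observer model, which is not implemented here.

```python
import numpy as np


def select_per_class(scores, labels, k):
    """Pick the k highest-scoring samples from each class.

    scores : per-sample realism score (higher = more realistic);
             assumed to be produced by some pretrained scorer.
    labels : integer class label per sample.
    k      : images to keep per class (e.g. 10 for the
             distilled ImageNet-1K setting in the abstract).
    Returns the sorted indices of the selected samples.
    """
    selected = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        # Sort this class's samples by descending score, keep top k.
        top = idx[np.argsort(scores[idx])[::-1][:k]]
        selected.extend(top.tolist())
    return sorted(selected)


# Toy example: 6 samples, 2 classes, keep 2 per class.
scores = np.array([0.9, 0.1, 0.8, 0.7, 0.2, 0.6])
labels = np.array([0, 0, 0, 1, 1, 1])
print(select_per_class(scores, labels, 2))  # → [0, 2, 3, 5]
```

Because selection only reads real images and never optimizes synthetic pixels, this style of pipeline is what allows the method's low wall-clock cost relative to optimization-based distillation.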
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 (test) | Accuracy | 62.6 | 3518 |
| Image Classification | CIFAR-10 (test) | Accuracy | 68.4 | 3381 |
| Image Classification | ImageNet-1K | Top-1 Accuracy | 65.4 | 836 |
| Image Classification | CIFAR-100 | Top-1 Accuracy | 64 | 622 |
| Image Classification | Tiny ImageNet (test) | Accuracy | 47.6 | 265 |
| Image Classification | Tiny-ImageNet | Accuracy | 58.2 | 227 |
| Image Classification | ImageNet-1k (val) | Accuracy | 56.5 | 189 |
| Image Classification | ImageNet-1K | Top-1 Accuracy | 62 | 137 |
| Image Classification | CIFAR-10 | Top-1 Accuracy | 62.1 | 124 |
| Dataset Distillation | ImageNet-1k (val) | Accuracy | 62.8 | 64 |