
Beyond Dataset Distillation: Lossless Dataset Concentration via Diffusion-Assisted Distribution Alignment

About

The high cost of, and limited access to, large datasets hinders the development of large-scale visual recognition systems. Dataset distillation addresses these problems by synthesizing compact surrogate datasets for efficient training, storage, transfer, and privacy preservation. Existing state-of-the-art diffusion-based dataset distillation methods face three issues: a lack of theoretical justification, poor efficiency when scaling to high data volumes, and failure in data-free scenarios. To address these issues, we establish a theoretical framework that justifies the use of diffusion models by proving the equivalence between dataset distillation and distribution matching, and that reveals an inherent efficiency limit of the dataset distillation paradigm. We then propose a Dataset Concentration (DsCo) framework that uses a diffusion-based Noise-Optimization (NOpt) method to synthesize a small yet representative set of samples, and optionally augments the synthetic data via "Doping", which mixes selected samples from the original dataset into the synthetic set to overcome the efficiency limit of dataset distillation. DsCo is applicable in both data-accessible and data-free scenarios, achieves state-of-the-art performance at low data volumes, and extends well to high data volumes, where it nearly halves the dataset size with no performance degradation.
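The abstract does not spell out how "Doping" selects the real samples it mixes in; the sketch below is a minimal illustration under an assumed selection rule (picking real samples nearest the mean real feature vector). The function name `dope`, the parameter `k`, and the centroid-proximity criterion are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def dope(synthetic, real, features, k):
    """Illustrative "Doping": append k selected real samples to the
    synthetic set. Selection here keeps the real samples closest to
    the mean real feature vector (an assumed stand-in criterion)."""
    centroid = features.mean(axis=0)                      # mean feature of real data
    dists = np.linalg.norm(features - centroid, axis=1)   # distance of each real sample
    chosen = np.argsort(dists)[:k]                        # indices of k nearest samples
    return np.concatenate([synthetic, real[chosen]], axis=0)

# Toy usage: 10 synthetic and 100 real 4-dim samples, dope with k=5.
rng = np.random.default_rng(0)
synthetic = rng.normal(size=(10, 4))
real = rng.normal(size=(100, 4))
doped = dope(synthetic, real, features=real, k=5)
print(doped.shape)  # (15, 4): synthetic set grown by the 5 doped samples
```

The point of the sketch is only the shape of the operation: the surrogate set stays small, and a handful of carefully chosen real samples is concatenated onto it, which is what lets the combined set pass the efficiency limit a purely synthetic set runs into.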

Tongfei Liu, Yufan Liu, Bing Li, Weiming Hu • 2026

Related benchmarks

Task                  Dataset             Metric          Result  Rank
Image Classification  ImageWoof (val)     Accuracy        91.2    105
Image Classification  ImageNet-1K         Accuracy        63      92
Image Classification  ImageNette (val)    Accuracy        0.98    63
Image Classification  ImageNet-1k (val)   DsCo Accuracy   70.1    5
