
Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation

About

Model-based deep learning has achieved astounding successes due in part to the availability of large-scale real-world data. However, processing such massive amounts of data comes at a considerable cost in terms of computation, storage, training, and the search for good neural architectures. Dataset distillation has thus recently come to the fore. This paradigm involves distilling information from large real-world datasets into tiny, compact synthetic datasets such that processing the latter ideally yields performance similar to the former. State-of-the-art methods primarily rely on learning the synthetic dataset by matching the gradients obtained during training between the real and synthetic data. However, these gradient-matching methods suffer from the so-called accumulated trajectory error caused by the discrepancy between the distillation and subsequent evaluation. To mitigate the adverse impact of this accumulated trajectory error, we propose a novel approach that encourages the optimization algorithm to seek a flat trajectory. We show that, with regularization towards a flat trajectory, the weights trained on synthetic data are robust against the perturbations induced by accumulated errors. Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7% on a subset of the ImageNet dataset with higher-resolution images. We also validate the effectiveness and generalizability of our method on datasets of different resolutions and demonstrate its applicability to neural architecture search. Code is available at https://github.com/AngusDujw/FTD-distillation.
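The matching objective the abstract alludes to can be sketched numerically. Below is a minimal NumPy illustration of a trajectory-matching loss: a student is trained for a few steps on synthetic data starting from an expert checkpoint, and the distance to the expert's later checkpoint is normalized by the length of the expert's own parameter movement. The function names and the toy quadratic training loss are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def trajectory_matching_loss(theta_student, theta_expert_start, theta_expert_target):
    """Normalized parameter-matching loss used by trajectory-matching
    distillation: ||student - target||^2 / ||start - target||^2."""
    num = np.sum((theta_student - theta_expert_target) ** 2)
    den = np.sum((theta_expert_start - theta_expert_target) ** 2)
    return num / den

def sgd_steps(theta, grad_fn, lr, n_steps):
    """Plain gradient descent from a given checkpoint."""
    for _ in range(n_steps):
        theta = theta - lr * grad_fn(theta)
    return theta

# Toy quadratic "training loss" whose gradient pulls theta toward the data mean.
def make_grad(data_mean):
    return lambda theta: theta - data_mean

rng = np.random.default_rng(0)
theta_start = rng.normal(size=5)      # shared expert checkpoint
real_mean = np.ones(5)                # stands in for the real dataset
syn_mean = np.full(5, 0.9)            # stands in for the learned synthetic dataset

# Expert trains longer on real data; student trains briefly on synthetic data.
theta_expert = sgd_steps(theta_start, make_grad(real_mean), lr=0.1, n_steps=20)
theta_student = sgd_steps(theta_start, make_grad(syn_mean), lr=0.1, n_steps=5)

loss = trajectory_matching_loss(theta_student, theta_start, theta_expert)
```

In actual distillation this loss would be backpropagated into the synthetic data; FTD additionally regularizes the expert's training so the matched trajectory lies in a flat region, making the student robust to the accumulated error at evaluation time.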

Jiawei Du, Yidi Jiang, Vincent Y. F. Tan, Joey Tianyi Zhou, Haizhou Li • 2022

Related benchmarks

Task                          Dataset                      Metric      Result   Rank
Image Classification          CIFAR-100 (test)             Accuracy    48.5     3518
Image Classification          CIFAR-10 (test)              Accuracy    73.8     3381
Classification                CIFAR10 (test)               Accuracy    73.8     266
Classification                CIFAR-100 (test)             Accuracy    50.7     129
Dataset Distillation          CIFAR-10 (test)              Accuracy    73.8     79
Dataset Distillation          CIFAR-100 (test)             Accuracy    50.7     52
Medical Image Classification  Covid (test)                 Accuracy    86.96    43
Image Classification          PathMNIST v2 (test)          Accuracy    87.65    35
Image Classification          Tiny ImageNet 64x64 (test)   Accuracy    24.5     27
Image Classification          ImageNette 128x128 (test)    Top-1 Acc   67.7     16

(Showing 10 of 21 rows)

Other info

Code: https://github.com/AngusDujw/FTD-distillation
