
Dataset Distillation by Automatic Training Trajectories

About

Dataset Distillation is used to create a concise yet informative synthetic dataset that can replace the original dataset for training purposes. Some leading methods in this domain prioritize long-range matching, unrolling training trajectories for a fixed number of steps (NS) on the synthetic dataset to align them with various expert training trajectories. However, traditional long-range matching methods suffer from an overfitting-like problem: the fixed step count NS forces the synthetic dataset to conform distortedly to the expert training trajectories seen during distillation, resulting in a loss of generality, especially toward trajectories from unencountered architectures. We refer to this as the Accumulated Mismatching Problem (AMP) and propose a new approach, Automatic Training Trajectories (ATT), which dynamically and adaptively adjusts the trajectory length NS to address the AMP. Our method outperforms existing methods, particularly in cross-architecture tests. Moreover, owing to its adaptive nature, it exhibits enhanced stability in the face of parameter variations.
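The core adaptive idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: `step_fn` is a hypothetical placeholder for one training step on the synthetic data, and `matching_loss` is a simplified MTT-style normalized parameter-matching loss. Instead of unrolling a fixed NS steps, the student is unrolled up to a maximum and the step count whose parameters best match the expert target is selected.

```python
import numpy as np

def matching_loss(student_params, target_params, start_params):
    # Normalized parameter-matching loss, as commonly used in
    # trajectory-matching methods (simplified for illustration).
    num = np.sum((student_params - target_params) ** 2)
    den = np.sum((start_params - target_params) ** 2)
    return num / den

def att_trajectory_length(start_params, target_params, step_fn, max_steps):
    """Unroll up to max_steps on the synthetic data and return the step
    count whose parameters best match the expert target parameters,
    rather than a fixed NS (hypothetical sketch of the ATT idea)."""
    params = start_params.copy()
    best_loss = matching_loss(params, target_params, start_params)
    best_n = 0
    for n in range(1, max_steps + 1):
        params = step_fn(params)  # one training step on synthetic data
        loss = matching_loss(params, target_params, start_params)
        if loss < best_loss:
            best_loss, best_n = loss, n
    return best_n, best_loss

# Toy usage: each step moves the parameters 0.3 toward (and past) the
# target, so a fixed step count could overshoot; the adaptive choice
# stops where the match is tightest.
start = np.zeros(1)
target = np.ones(1)
best_n, best_loss = att_trajectory_length(
    start, target, step_fn=lambda p: p + 0.3, max_steps=6
)
```

In this toy run the closest match occurs after 3 steps (parameters at 0.9), whereas a fixed NS of 5 or 6 would overshoot the target and inflate the matching loss — the distortion the AMP describes.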

Dai Liu, Jindong Gu, Hu Cao, Carsten Trinitis, Martin Schulz • 2024

Related benchmarks

Task                          | Dataset                    | Result         | Rank
Image Classification          | CIFAR-100 (test)           | Accuracy 51.2  | 3518
Image Classification          | CIFAR-10 (test)            | Accuracy 74.5  | 3381
Classification                | CIFAR10 (test)             | Accuracy 74.5  | 266
Classification                | CIFAR-100 (test)           | Accuracy 51.2  | 129
Medical Image Classification  | Covid (test)               | Accuracy 87.62 | 43
Image Classification          | PathMNIST v2 (test)        | Accuracy 88.41 | 35
Image Classification          | Tiny ImageNet 64x64 (test) | Accuracy 25.8  | 27
