
Dataset Condensation with Differentiable Siamese Augmentation

About

In many machine learning problems, large-scale datasets have become the de-facto standard for training state-of-the-art deep networks, at the price of a heavy computation load. In this paper, we focus on condensing large training sets into significantly smaller synthetic sets which can be used to train deep neural networks from scratch with minimal drop in performance. Inspired by recent training set synthesis methods, we propose Differentiable Siamese Augmentation, which enables effective use of data augmentation to synthesize more informative synthetic images and thus achieves better performance when training networks with augmentations. Experiments on multiple image classification benchmarks demonstrate that the proposed method obtains substantial gains over the state-of-the-art, with 7% improvements on the CIFAR10 and CIFAR100 datasets. Using less than 1% of the data, our method achieves 99.6%, 94.9%, 88.5%, and 71.5% relative performance on MNIST, FashionMNIST, SVHN, and CIFAR10 respectively. We also explore the use of our method in continual learning and neural architecture search, and show promising results.
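The "siamese" idea in the abstract can be sketched as follows: at each matching step, one augmentation is sampled and the *same* transformation (with the same parameters) is applied to both the real and the synthetic batch before computing the matching loss. This is a minimal numpy illustration, not the paper's implementation: the names `siamese_augment` and `matching_loss` are hypothetical, the augmentation is a toy circular shift standing in for differentiable crop/flip/color ops, and the loss is a simple mean-image distance standing in for the actual gradient-matching objective over network parameters.

```python
import numpy as np

def siamese_augment(real_batch, syn_batch, rng):
    """Sample ONE augmentation parameter and apply the SAME transform
    to both batches, so the matching loss compares like with like.
    (Toy augmentation: circular horizontal pixel shift.)"""
    shift = int(rng.integers(-2, 3))          # shared random parameter
    aug = lambda x: np.roll(x, shift, axis=-1)
    return aug(real_batch), aug(syn_batch)

def matching_loss(real_batch, syn_batch):
    """Toy stand-in for the gradient-matching objective: squared
    distance between the mean augmented images of the two batches."""
    return float(np.mean((real_batch.mean(0) - syn_batch.mean(0)) ** 2))

rng = np.random.default_rng(0)
real = rng.normal(size=(8, 1, 8, 8))          # hypothetical real images
syn = rng.normal(size=(2, 1, 8, 8))           # small condensed synthetic set
r_aug, s_aug = siamese_augment(real, syn, rng)
loss = matching_loss(r_aug, s_aug)
```

In the actual method the augmentation operations are differentiable, so the loss gradient flows back through the augmentation into the synthetic pixels being optimized; the siamese (shared-parameter) sampling is what keeps the real and synthetic gradients comparable under augmentation.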

Bo Zhao, Hakan Bilen • 2021

Related benchmarks

| Task                 | Dataset              | Accuracy | Rank |
|----------------------|----------------------|----------|------|
| Image Classification | CIFAR-100 (test)     | 42.8     | 3518 |
| Image Classification | CIFAR-10 (test)      | 64.2     | 3381 |
| Image Classification | MNIST (test)         | 99.2     | 894  |
| Image Classification | CIFAR-100            | 42.8     | 691  |
| Image Classification | Fashion MNIST (test) | 88.7     | 592  |
| Image Classification | CIFAR-10             | 60.6     | 508  |
| Image Classification | CIFAR-10             | 60.6     | 507  |
| Image Classification | CIFAR-100            | 42.8     | 435  |
| Image Classification | SVHN (test)          | 84.4     | 401  |
| Image Classification | Tiny ImageNet (test) | 6.6      | 362  |
Showing 10 of 44 rows
