
Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models

About

We propose a conceptually simple and lightweight framework for improving the robustness of vision models through the combination of knowledge distillation and data augmentation. We address the conjecture that larger models do not make for better teachers by showing strong gains in out-of-distribution robustness when distilling from pretrained foundation models. Following this finding, we propose Discrete Adversarial Distillation (DAD), which leverages a robust teacher to generate adversarial examples and a VQGAN to discretize them, creating more informative samples than standard data augmentation techniques. We provide a theoretical framework for the use of a robust teacher in the knowledge distillation with data augmentation setting and demonstrate strong gains in out-of-distribution robustness and clean accuracy across different student architectures. Notably, our method adds minor computational overhead compared to similar techniques and can be easily combined with other data augmentations for further improvements.

Andy Zhou, Jindong Wang, Yu-Xiong Wang, Haohan Wang• 2023
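The pipeline described in the abstract (adversarial examples generated against a robust teacher, discretized by a VQGAN, then used as distillation targets for the student) can be sketched roughly as below. This is an illustrative reconstruction, not the paper's implementation: the PGD-style attack, the loss weighting, and all hyperparameter values are assumptions, and `vqgan` stands in for any encode/decode quantization model.

```python
import torch
import torch.nn.functional as F

def dad_step(student, teacher, vqgan, x, y,
             eps=4/255, step=1/255, n_steps=3, alpha=0.5, tau=2.0):
    """One sketched Discrete Adversarial Distillation training step.

    All hyperparameters (eps, step, n_steps, alpha, tau) are illustrative
    defaults, not values from the paper.
    """
    # 1) Generate an adversarial example against the *teacher* (PGD-style,
    #    assumed here; the attack maximizes the teacher's classification loss).
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        loss = F.cross_entropy(teacher(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + step * grad.sign()).detach()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).requires_grad_(True)

    # 2) Discretize the perturbed image through the VQGAN encode/decode
    #    bottleneck, pulling it back toward the natural-image manifold.
    with torch.no_grad():
        x_disc = vqgan(x_adv.detach())
        t_logits = teacher(x_disc)

    # 3) Distill: the student matches the teacher's soft labels on the
    #    discretized adversarial sample, plus standard CE on the clean input.
    s_logits = student(x_disc)
    kd = F.kl_div(F.log_softmax(s_logits / tau, dim=-1),
                  F.softmax(t_logits / tau, dim=-1),
                  reduction="batchmean") * tau ** 2
    ce = F.cross_entropy(student(x), y)
    return alpha * kd + (1 - alpha) * ce
```

Because the augmented samples are produced by the teacher and the VQGAN in forward/backward passes that do not train either model, the extra cost per step is bounded by the attack and one decode, which is consistent with the abstract's claim of minor overhead.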

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 79.8 | 844 |
| Image Classification | ImageNet-A | Top-1 Accuracy | 40.3 | 654 |
| Image Classification | ImageNet-R | Top-1 Accuracy | 72.1 | 529 |
| Image Classification | ImageNet-Sketch | Top-1 Accuracy | 51.2 | 407 |
| Image Classification | ImageNet-C | mCE (lower is better) | 52 | 115 |
| Image Classification | ImageNet Matched Frequency V2 | Top-1 Accuracy | 70.9 | 92 |
| Image Classification | Stylized-ImageNet | Top-1 Accuracy | 23.4 | 89 |
| Image Classification | ImageNet-C 1.0 (test) | -- | -- | 53 |
| Image Classification | Stylized-ImageNet (test) | Accuracy | 22.6 | 21 |
| Image Classification | ImageNet and ImageNet-V2 | ImageNet Accuracy | 81.9 | 17 |
(10 of 11 benchmark rows shown.)
