
Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model

About

We study how to train a student deep neural network for visual recognition by distilling knowledge from a blackbox teacher model in a data-efficient manner. Progress on this problem can significantly reduce the dependence on large-scale datasets for learning high-performing visual recognition models. There are two major challenges. One is that the number of queries into the teacher model should be minimized to save computational and/or financial costs. The other is that the number of images used for the knowledge distillation should be small; otherwise, it violates our expectation of reducing the dependence on large-scale datasets. To tackle these challenges, we propose an approach that blends mixup and active learning. The former effectively augments the few unlabeled images with a big pool of synthetic images sampled from the convex hull of the original images, and the latter actively chooses hard examples for the student neural network from the pool and queries their labels from the teacher model. We validate our approach with extensive experiments.
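The two ingredients described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: images are flattened NumPy arrays, the synthetic pool is built by pairwise mixup (convex combinations of random image pairs), and "hard examples" are approximated here by the student's least-confident predictions; the function names and the confidence criterion are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_pool(images, n_synth, alpha=1.0):
    """Build a pool of synthetic images as convex combinations
    (lam * x_i + (1 - lam) * x_j) of random pairs of real images."""
    n = len(images)
    i = rng.integers(0, n, n_synth)
    j = rng.integers(0, n, n_synth)
    lam = rng.beta(alpha, alpha, n_synth)[:, None]  # mixing weights in [0, 1]
    return lam * images[i] + (1 - lam) * images[j]

def select_hard(student_probs, k):
    """Pick the k pool images the student is least confident about
    (lowest max class probability) -- a simple stand-in for 'hard'
    examples whose labels are then queried from the blackbox teacher."""
    confidence = student_probs.max(axis=1)
    return np.argsort(confidence)[:k]
```

In a training loop, one would alternate between synthesizing a pool with `mixup_pool`, scoring it with the current student, querying the teacher only on the indices returned by `select_hard`, and updating the student on those labeled examples, which keeps both the query budget and the real-image count small.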

Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong• 2020

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | CIFAR-100 (test) | -- | 3518 |
| Image Classification | ImageNet-1K | Top-1 Acc: 56.92 | 1239 |
| Image Classification | MNIST (test) | Accuracy: 99.47 | 894 |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy: 56.92 | 844 |
| Image Classification | CIFAR-100 | -- | 691 |
| Image Classification | CIFAR-10 | -- | 507 |
| Image Classification | TinyImageNet (test) | -- | 440 |
| Image Classification | MNIST | -- | 417 |
| Image Classification | Tiny-ImageNet | Top-1 Accuracy: 51.54 | 230 |
| Image Classification | SVHN (test) | Top-1 Accuracy: 86.7 | 26 |
