
Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model

About

We study how to train a student deep neural network for visual recognition by distilling knowledge from a blackbox teacher model in a data-efficient manner. Progress on this problem can significantly reduce the dependence on large-scale datasets for learning high-performing visual recognition models. There are two major challenges. First, the number of queries to the teacher model should be minimized to save computational and/or financial costs. Second, the number of images used for knowledge distillation should be small; otherwise, it defeats our goal of reducing the dependence on large-scale datasets. To tackle these challenges, we propose an approach that blends mixup and active learning. The former augments the few unlabeled images into a large pool of synthetic images sampled from the convex hull of the original images; the latter actively selects from this pool hard examples for the student network and queries their labels from the teacher model. We validate our approach with extensive experiments.
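The two ingredients described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: the function names (`mixup_pool`, `select_hard_examples`) and the least-confidence criterion for "hard" examples are assumptions for demonstration; the paper's actual selection rule and models may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_pool(images, n_synthetic):
    """Synthesize images as convex combinations of random pairs of
    the original images (points in their convex hull)."""
    n = len(images)
    pool = []
    for _ in range(n_synthetic):
        i, j = rng.choice(n, size=2, replace=False)
        lam = rng.uniform(0.0, 1.0)
        pool.append(lam * images[i] + (1.0 - lam) * images[j])
    return np.stack(pool)

def select_hard_examples(pool, student_predict, k):
    """Pick the k pool images the student is least confident about
    (lowest maximum predicted class probability)."""
    probs = student_predict(pool)              # shape (N, num_classes)
    confidence = probs.max(axis=1)
    return pool[np.argsort(confidence)[:k]]

# Toy demo: random flattened "images" and a random-softmax "student".
images = rng.normal(size=(5, 8))
def toy_student(x):
    logits = rng.normal(size=(len(x), 3))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

pool = mixup_pool(images, n_synthetic=100)
hard = select_hard_examples(pool, toy_student, k=10)
# `hard` is what would next be sent to the blackbox teacher to label.
```

Only the selected hard examples are labeled by the teacher, which is how the approach keeps both the query budget and the image budget small.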

Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong • 2020

Related benchmarks

Task                  Dataset              Result                  Rank
Image Classification  CIFAR-100 (test)     --                      3518
Image Classification  MNIST (test)         Accuracy 99.47          882
Image Classification  ImageNet-1k (val)    Top-1 Accuracy 56.92    840
Image Classification  ImageNet-1K          Top-1 Acc 56.92         836
Image Classification  CIFAR-100            Top-1 Accuracy 65.58    622
Image Classification  CIFAR-10             --                      507
Image Classification  MNIST                --                      395
Image Classification  TinyImageNet (test)  --                      366
Image Classification  Tiny-ImageNet        Top-1 Accuracy 51.54    143
Image Classification  SVHN (test)          Top-1 Accuracy 86.7     26
