
DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers

About

Transformers have been successfully applied to computer vision thanks to the powerful modeling capacity of self-attention. However, their excellent performance depends heavily on enormous amounts of training images, so a data-efficient transformer solution is urgently needed. In this work, we propose an early knowledge distillation framework, termed DearKD, to improve the data efficiency of transformers. DearKD is a two-stage framework that first distills the inductive biases from the early intermediate layers of a CNN and then gives the transformer full play by training it without distillation. Further, DearKD can readily be applied to the extreme data-free case where no real images are available. For this case, we propose a boundary-preserving intra-divergence loss based on DeepInversion to further close the performance gap with the full-data counterpart. Extensive experiments on ImageNet, partial ImageNet, the data-free setting, and other downstream tasks demonstrate the superiority of DearKD over its baselines and state-of-the-art methods.
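The two-stage schedule described above can be pictured with a short PyTorch sketch. This is a minimal illustration under assumptions, not the authors' released implementation: the MSE feature alignment, the `stage1_epochs` cutoff, and the `alpha` weight are hypothetical placeholders standing in for DearKD's actual distillation objective and alignment modules.

```python
# Minimal sketch of a two-stage early-knowledge-distillation loss.
# Hypothetical names and hyperparameters; not the authors' code.
import torch.nn.functional as F

def dearkd_style_loss(student_logits, student_feats, teacher_feats,
                      labels, epoch, stage1_epochs=50, alpha=1.0):
    """student_feats / teacher_feats: lists of early intermediate feature
    maps, assumed already projected to a common shape."""
    task_loss = F.cross_entropy(student_logits, labels)
    if epoch < stage1_epochs:
        # Stage 1: align the transformer's early features with the CNN
        # teacher's, transferring convolutional inductive biases.
        distill_loss = sum(F.mse_loss(s, t)
                           for s, t in zip(student_feats, teacher_feats))
        return task_loss + alpha * distill_loss
    # Stage 2: distillation is switched off; the transformer trains freely.
    return task_loss
```

The key design point the sketch captures is the schedule: inductive biases are injected only early in training, after which the transformer is no longer constrained by the teacher.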

Xianing Chen, Qiong Cao, Yujie Zhong, Jing Zhang, Shenghua Gao, Dacheng Tao • 2022

Related benchmarks

Task                  Dataset                  Metric           Result (%)   Rank
Image Classification  ImageNet-1k (val)        Top-1 Accuracy   84.4         840
Image Classification  CIFAR-100                Top-1 Accuracy   91.1         622
Image Classification  Stanford Cars            --               --           477
Image Classification  ImageNet                 Top-1 Accuracy   74           324
Image Classification  CIFAR-100                --               --           302
Classification        ImageNet 1k (test val)   Top-1 Accuracy   82.8         138
Image Classification  CIFAR-10                 Top-1 Accuracy   99.2         124
Image Classification  Flowers                  Accuracy         97.4         83
Image Classification  Oxford Flowers           Top-1 Accuracy   98.8         78
Image Classification  ImageNet-1k (val)        Top-1 Accuracy   81.5         17

(Showing 10 of 11 rows.)
