
Knowledge Distillation via the Target-aware Transformer

About

Knowledge distillation has become a de facto standard for improving the performance of small neural networks. Most previous works propose to regress the representational features from the teacher to the student in a one-to-one spatial matching fashion. However, this overlooks the fact that, due to architectural differences, the semantic information at the same spatial location usually varies between the two networks. This greatly undermines the underlying assumption of the one-to-one distillation approach. To this end, we propose a novel one-to-all spatial matching knowledge distillation approach. Specifically, we allow each pixel of the teacher feature to be distilled to all spatial locations of the student features, weighted by a similarity that is generated from a target-aware transformer. Our approach surpasses the state-of-the-art methods by a significant margin on various computer vision benchmarks, such as ImageNet, Pascal VOC and COCOStuff10k. Code is available at https://github.com/sihaoevery/TaT.
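The core idea of one-to-all matching can be sketched as follows. This is a minimal, simplified illustration (not the authors' implementation): each teacher pixel attends over all student locations via learned query/key projections, and the attention-aggregated student feature is regressed toward the teacher pixel. The function and parameter names (`one_to_all_distill_loss`, `w_q`, `w_k`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def one_to_all_distill_loss(f_t, f_s, w_q, w_k):
    """Sketch of one-to-all spatial matching distillation.

    f_t, f_s : (N, C) teacher / student features with spatial dims
               flattened (N = H * W).
    w_q, w_k : (C, C) hypothetical query/key projections of the
               target-aware transformer.
    """
    q = f_t @ w_q                                   # queries from teacher pixels
    k = f_s @ w_k                                   # keys from student pixels
    attn = softmax(q @ k.T / np.sqrt(q.shape[1]))   # (N, N) similarity: each
                                                    # teacher pixel vs. ALL
                                                    # student locations
    f_s_agg = attn @ f_s                            # aggregate student features
    return float(np.mean((f_t - f_s_agg) ** 2))     # regress toward teacher
```

In contrast, a one-to-one baseline would simply compute `np.mean((f_t - f_s) ** 2)`, forcing spatially aligned pixels to match even when their semantics differ.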

Sihao Lin, Hongwei Xie, Bing Wang, Kaicheng Yu, Xiaojun Chang, Xiaodan Liang, Gang Wang • 2022

Related benchmarks

| Task                  | Dataset        | Result                  | Rank |
| --------------------- | -------------- | ----------------------- | ---- |
| Image Classification  | ImageNet (val) | -                       | 300  |
| Image Classification  | CIFAR100       | Average Accuracy: 76.06 | 121  |
| Semantic segmentation | COCO-Stuff 10K | mIoU: 28.75             | 16   |

Other info

Code: https://github.com/sihaoevery/TaT
