Knowledge Diffusion for Distillation

About

The representation gap between teacher and student is an emerging topic in knowledge distillation (KD). To reduce the gap and improve performance, current methods often resort to complicated training schemes, loss functions, and feature alignments, which are task-specific and feature-specific. In this paper, we argue that the essence of these methods is to discard the noisy information and distill the valuable information in the feature, and we propose a novel KD method dubbed DiffKD that explicitly denoises and matches features using diffusion models. Our approach is based on the observation that student features typically contain more noise than teacher features because of the student model's smaller capacity. To address this, we propose to denoise student features using a diffusion model trained on teacher features, which allows distillation to be performed between the refined, clean feature and the teacher feature. Additionally, we introduce a lightweight diffusion model with a linear autoencoder to reduce the computation cost, and an adaptive noise matching module to improve the denoising performance. Extensive experiments demonstrate that DiffKD is effective across various types of features and consistently achieves state-of-the-art performance on image classification, object detection, and semantic segmentation tasks. Code is available at https://github.com/hunto/DiffKD.

Tao Huang, Yuan Zhang, Mingkai Zheng, Shan You, Fei Wang, Chen Qian, Chang Xu • 2023
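
The abstract above describes the core mechanism: a lightweight diffusion model, trained on teacher features and operating in a latent space produced by a linear autoencoder, denoises the (noisier) student feature before the distillation loss is computed. Below is a minimal PyTorch-style sketch of that idea for orientation only; it is not the authors' implementation (see the linked repository for that). The single-step denoiser, the module names, the loss terms, and the omission of the adaptive noise matching module are all simplifying assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearAutoencoder(nn.Module):
    """1x1-conv autoencoder that compresses features to reduce diffusion cost."""
    def __init__(self, channels, latent_channels):
        super().__init__()
        self.encode = nn.Conv2d(channels, latent_channels, kernel_size=1)
        self.decode = nn.Conv2d(latent_channels, channels, kernel_size=1)

class DiffKDSketch(nn.Module):
    """Simplified sketch of diffusion-based feature denoising for distillation."""
    def __init__(self, channels, latent_channels=64):
        super().__init__()
        self.ae = LinearAutoencoder(channels, latent_channels)
        # A tiny convolutional denoiser standing in for the lightweight diffusion model.
        self.denoiser = nn.Sequential(
            nn.Conv2d(latent_channels, latent_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(latent_channels, latent_channels, 3, padding=1),
        )

    def diffusion_loss(self, teacher_feat):
        """Train the denoiser on noised teacher latents (single noise step, simplified)."""
        t = teacher_feat.detach()
        z = self.ae.encode(t)
        noise = torch.randn_like(z)
        denoised = self.denoiser(z + noise)
        # Recover the clean teacher latent; also keep the autoencoder reconstruction faithful.
        return F.mse_loss(denoised, z) + F.mse_loss(self.ae.decode(z), t)

    def distill_loss(self, student_feat, teacher_feat):
        """Treat the student feature as a noisy sample, denoise it, then match the teacher."""
        z_s = self.ae.encode(student_feat)
        refined = self.ae.decode(self.denoiser(z_s))
        return F.mse_loss(refined, teacher_feat.detach())

# Hypothetical usage with 256-channel backbone features:
#   diffkd = DiffKDSketch(channels=256)
#   loss = diffkd.diffusion_loss(t_feat) + diffkd.distill_loss(s_feat, t_feat)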

Related benchmarks

Task                    Dataset                  Metric            Result   Rank
Image Classification    CIFAR-100 (test)         Accuracy          77.52    3518
Semantic Segmentation   ADE20K (val)             mIoU              34.85    2888
Object Detection        COCO 2017 (val)          AP                42.4     2643
Image Classification    ImageNet-1K 1.0 (val)    Top-1 Accuracy    73.62    1952
Semantic Segmentation   Cityscapes (test)        mIoU              76.24    1154
Image Classification    CIFAR-100 (val)          --                --       776
Object Detection        COCO (val)               mAP               42.5     633
Object Detection        MS-COCO (val)            mAP               0.424    211
Image Classification    ImageNet                 --                --       184
Image Classification    ImageNet-1k (val)        Top-1 Acc         73.27    26

Showing 10 of 11 rows.

Other info

Code: https://github.com/hunto/DiffKD