
Similarity-Preserving Knowledge Distillation

About

Knowledge distillation is a widely applicable technique for training a student neural network under the guidance of a trained teacher network. For example, in neural network compression, a high-capacity teacher is distilled to train a compact student; in privileged learning, a teacher trained with privileged data is distilled to train a student without access to that data. The distillation loss determines how a teacher's knowledge is captured and transferred to the student. In this paper, we propose a new form of knowledge distillation loss that is inspired by the observation that semantically similar inputs tend to elicit similar activation patterns in a trained network. Similarity-preserving knowledge distillation guides the training of a student network such that input pairs that produce similar (dissimilar) activations in the teacher network produce similar (dissimilar) activations in the student network. In contrast to previous distillation methods, the student is not required to mimic the representation space of the teacher, but rather to preserve the pairwise similarities in its own representation space. Experiments on three public datasets demonstrate the potential of our approach.
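The pairwise-similarity idea in the abstract can be sketched in a few lines. The sketch below (NumPy, with function and variable names of my own choosing) builds a batch-level similarity matrix from each network's activations, row-normalizes it, and penalizes the squared Frobenius distance between the teacher's and student's matrices; note that the matrices are b×b regardless of channel or spatial dimensions, so the student never has to match the teacher's representation space directly.

```python
import numpy as np

def sp_loss(teacher_acts, student_acts):
    """Similarity-preserving distillation loss for one layer pair (sketch).

    teacher_acts, student_acts: activation maps of shape (b, c, h, w).
    The batch size b must match; channel/spatial sizes may differ, which
    is what lets a compact student learn from a high-capacity teacher.
    """
    def normalized_similarity(acts):
        b = acts.shape[0]
        q = acts.reshape(b, -1)                 # flatten to (b, c*h*w)
        g = q @ q.T                             # (b, b) pairwise similarities
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        return g / np.maximum(norms, 1e-12)     # row-wise L2 normalization

    b = teacher_acts.shape[0]
    g_t = normalized_similarity(teacher_acts)
    g_s = normalized_similarity(student_acts)
    # squared Frobenius distance, averaged over the b*b matrix entries
    return np.sum((g_t - g_s) ** 2) / (b * b)
```

In training, this term would be summed over selected teacher/student layer pairs and added (with a weighting coefficient) to the student's usual cross-entropy loss; the loss is zero exactly when the student reproduces the teacher's normalized pairwise similarity structure.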

Frederick Tung, Greg Mori • 2019

Related benchmarks

Task                       Dataset              Metric             Result   Rank
Image Classification       CIFAR-100 (test)     Accuracy           74.56    3518
Image Classification       ImageNet-1k (val)    --                 --       1453
Person Re-Identification   Market 1501          mAP                50.7     999
Image Classification       CIFAR-100            Top-1 Accuracy     78.33    622
Image Classification       CIFAR100 (test)      Top-1 Accuracy     75.34    377
Image Classification       TinyImageNet (test)  Accuracy           35.69    366
Image Classification       STL-10 (test)        Accuracy           68.96    357
Image Classification       ImageNet (test)      Top-1 Accuracy     71.08    235
Image Classification       CIFAR100 (test)      Test Accuracy      75.56    147
Image Classification       CIFAR100             Average Accuracy   73.83    121

Showing 10 of 34 rows
