Contrastive Representation Distillation

About

Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation and other cutting-edge distillers on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. Our method sets a new state-of-the-art in many transfer tasks, and sometimes even outperforms the teacher network when combined with knowledge distillation. Code: http://github.com/HobbitLong/RepDistiller.
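The two objectives contrasted above can be made concrete in a short sketch. The following is a minimal PyTorch illustration, not the paper's implementation (the released RepDistiller code samples many negatives from a memory buffer and uses a specific critic; here negatives are taken in-batch for brevity). The function names, the temperatures `T` and `tau`, and the assumption that student and teacher embeddings `f_s`, `f_t` have already been projected to a common dimension are all illustrative choices, not details from the paper:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Classic knowledge distillation (Hinton et al., 2015): KL divergence
    between temperature-softened teacher and student output distributions."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T ** 2)

def contrastive_distill_loss(f_s, f_t, tau=0.07):
    """InfoNCE-style contrastive objective on representations: the teacher
    embedding of the SAME input is the positive; teacher embeddings of the
    other inputs in the batch act as negatives. (CRD itself draws far more
    negatives from a memory buffer; in-batch negatives are a simplification.)"""
    z_s = F.normalize(f_s, dim=1)            # (N, d) student embeddings
    z_t = F.normalize(f_t, dim=1)            # (N, d) teacher embeddings
    logits = z_s @ z_t.t() / tau             # (N, N) pairwise similarities
    targets = torch.arange(z_s.size(0), device=z_s.device)  # diagonal = positives
    return F.cross_entropy(logits, targets)
```

Intuitively, the KL term only matches the teacher's per-sample output distribution, while the contrastive term forces the student's embedding space to preserve which inputs the teacher considers similar or dissimilar; in practice the two losses can be combined with the usual cross-entropy term, which is the setting in which the abstract reports sometimes surpassing the teacher.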

Yonglong Tian, Dilip Krishnan, Phillip Isola · 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | CIFAR-100 (test) | Accuracy | 77.86 | 3518 |
| Image Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy | 71.4 | 1866 |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 71.37 | 1453 |
| Image Classification | ImageNet (val) | Top-1 Accuracy | 71.37 | 1206 |
| Image Classification | ImageNet-1K | Top-1 Accuracy | 69.6 | 836 |
| Image Classification | CIFAR-100 (val) | Accuracy | 77.8 | 661 |
| Image Classification | CIFAR-100 | Top-1 Accuracy | 75.51 | 622 |
| Image Classification | ImageNet-1K | -- | -- | 524 |
| Image Classification | CIFAR-10 | Accuracy | 93.18 | 507 |
| Image Classification | CIFAR100 (test) | Top-1 Accuracy | 76.04 | 377 |
Showing 10 of 63 benchmark rows.

Other info

Code: http://github.com/HobbitLong/RepDistiller
