
Self-supervised Knowledge Distillation for Few-shot Learning

About

The real world contains an overwhelmingly large number of object classes, and learning all of them at once is infeasible. Few-shot learning is a promising paradigm because of its ability to quickly learn out-of-distribution classes from only a few samples. Recent works [7, 41] show that simply learning a good feature embedding can outperform more sophisticated meta-learning and metric-learning algorithms for few-shot learning. In this paper, we propose a simple approach to improve the representation capacity of deep neural networks for few-shot learning tasks. We follow a two-stage learning process: first, we train a neural network to maximize the entropy of the feature embedding, creating an optimal output manifold with a self-supervised auxiliary loss. In the second stage, we minimize the entropy of the feature embedding by bringing self-supervised twins together, while constraining the manifold with student-teacher distillation. Our experiments show that, even in the first stage, self-supervision can outperform current state-of-the-art methods, with further gains achieved by our second-stage distillation process. Our code is available at: https://github.com/brjathu/SKD.
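The two stages described above can be read as two loss functions. The sketch below is a minimal PyTorch-style illustration under our own assumptions, not the authors' released implementation (see the linked repository for that): we assume `model(x)` returns an (embedding, class-logits) pair, use rotation prediction as the self-supervised auxiliary task, and treat the four rotated copies of an image as the "self-supervised twins"; the names `rot_head`, `alpha`, `beta`, and the temperature `T` are illustrative.

# Illustrative sketch only; interfaces and hyper-parameters are assumptions.
import torch
import torch.nn.functional as F


def rotate_batch(x):
    """Repeat an image batch under 0/90/180/270-degree rotations and return
    the rotated images together with their rotation labels (pretext task)."""
    rotations = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    return torch.cat(rotations, dim=0), labels


def stage1_loss(model, rot_head, x, y, alpha=1.0):
    """Stage 1: supervised cross-entropy plus a self-supervised rotation
    loss; the auxiliary task spreads features over the output manifold
    (the entropy-maximization step described in the abstract)."""
    x_rot, rot_labels = rotate_batch(x)
    feats, logits = model(x_rot)
    ce = F.cross_entropy(logits, y.repeat(4))           # supervised term
    ss = F.cross_entropy(rot_head(feats), rot_labels)   # self-supervised term
    return ce + alpha * ss


def stage2_loss(student, teacher, x, T=4.0, beta=1.0):
    """Stage 2: distill from the frozen stage-1 teacher while pulling the
    embeddings of the rotated twins of each image together
    (the entropy-minimization step described in the abstract)."""
    x_rot, _ = rotate_batch(x)
    s_feats, s_logits = student(x_rot)
    with torch.no_grad():
        _, t_logits = teacher(x_rot)
    # KL-divergence distillation constrains the student to the teacher's manifold.
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    # L2 term bringing the four rotated copies of each image close together.
    twins = s_feats.view(4, x.size(0), -1)
    l2 = ((twins - twins.mean(dim=0, keepdim=True)) ** 2).mean()
    return kd + beta * l2

In practice, `stage1_loss` would be minimized first to obtain the teacher; the student in `stage2_loss` is then initialized from that teacher and fine-tuned with the distillation plus twin-alignment objective.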

Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Mubarak Shah · 2020

Related benchmarks

Task | Dataset | Result | Rank
Few-shot Classification | Mini-ImageNet | – | 175
Few-shot Image Classification | tieredImageNet | Accuracy: 0.8666 | 90
5-way Few-shot Image Classification | FC100 (test) | 1-shot Accuracy: 46.5 | 78
5-way Classification | tieredImageNet (test) | Accuracy: 86.5 | 66
5-way Few-shot Image Classification | CIFAR-FS (test) | 1-shot Accuracy: 76.9 | 63
5-way Image Classification | Mini-ImageNet (test) | Top-1 Accuracy: 83.54 | 46

Other info

Code
