Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?

About

The focus of recent meta-learning research has been on the development of learning algorithms that can quickly adapt to test-time tasks with limited data and low computational cost. Few-shot learning is widely used as one of the standard benchmarks in meta-learning. In this work, we show that a simple baseline outperforms state-of-the-art few-shot learning methods: learning a supervised or self-supervised representation on the meta-training set, followed by training a linear classifier on top of this representation. An additional boost can be achieved through the use of self-distillation. This demonstrates that a good learned embedding model can be more effective than sophisticated meta-learning algorithms. We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms. Code is available at: http://github.com/WangYueFt/rfs/.
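The baseline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `embed` function stands in for a frozen backbone pretrained on the meta-training set (here it is just a fixed random projection), and the episode data is simulated with random arrays. The key point is that at evaluation time, only a linear classifier is fit on the support set's frozen embeddings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def embed(images):
    # Stand-in for a frozen embedding model trained on the meta-training
    # set; a fixed random projection is used here purely as a placeholder.
    W = np.random.default_rng(42).normal(size=(images.shape[1], 64))
    feats = images @ W
    # L2-normalize features, as is common before a linear classifier.
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

# Simulated 5-way 1-shot episode: 5 support images, 25 query images,
# each "image" flattened to a 128-dim vector for illustration.
n_way, n_shot, n_query = 5, 1, 5
support_x = rng.normal(size=(n_way * n_shot, 128))
support_y = np.repeat(np.arange(n_way), n_shot)
query_x = rng.normal(size=(n_way * n_query, 128))

# The baseline: train only a linear classifier on frozen embeddings.
clf = LogisticRegression(max_iter=1000)
clf.fit(embed(support_x), support_y)
preds = clf.predict(embed(query_x))
print(preds.shape)  # one predicted class per query image
```

With a real pretrained backbone in place of the random projection, this per-episode procedure is all the adaptation the baseline performs; no meta-learned optimizer or inner-loop fine-tuning is involved.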

Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, Phillip Isola • 2020

Related benchmarks

Task                            Dataset                 Metric           Result   Rank
Image Classification            ImageNet 1k (test)      Top-1 Accuracy   55.9     798
Image Classification            STL-10 (test)           Accuracy         95       357
Image Classification            Stanford Cars (test)    Accuracy         70       306
Few-shot Classification         tieredImageNet (test)   Accuracy         84.41    282
Few-shot Image Classification   Mini-Imagenet (test)    --               --       235
Image Classification            FGVC-Aircraft (test)    Accuracy         36       231
5-way Classification            miniImageNet (test)     --               --       231
Image Classification            MiniImagenet            Accuracy         79.64    206
Image Classification            DTD (test)              Accuracy         64.3     181
Few-shot Classification         Mini-ImageNet           1-shot Acc       64.8     175

(Showing 10 of 54 rows.)
