
Boosting Few-Shot Visual Learning with Self-Supervision

About

Few-shot learning and self-supervised learning address different facets of the same problem: how to train a model with little or no labeled data. Few-shot learning aims for optimization methods and models that can learn efficiently to recognize patterns in the low data regime. Self-supervised learning focuses instead on unlabeled data and looks into it for the supervisory signal to feed high capacity deep neural networks. In this work we exploit the complementarity of these two domains and propose an approach for improving few-shot learning through self-supervision. We use self-supervision as an auxiliary task in a few-shot learning pipeline, enabling feature extractors to learn richer and more transferable visual representations while still using few annotated samples. Through self-supervision, our approach can be naturally extended towards using diverse unlabeled data from other datasets in the few-shot setting. We report consistent improvements across an array of architectures, datasets and self-supervision techniques.
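The core idea — a self-supervised pretext task attached as an auxiliary head on the shared feature extractor — can be illustrated with a minimal NumPy sketch. The rotation-prediction pretext task here is only simulated (rotations are stood in for by rolling the input vector), and all names (`multitask_loss`, `W_feat`, `alpha`, the layer sizes) are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

# Shared feature extractor (a single linear map here; a conv net in practice).
W_feat = rng.normal(size=(32, 16))
# Few-shot classification head (5-way) and self-supervised rotation head (4-way).
W_cls = rng.normal(size=(16, 5))
W_rot = rng.normal(size=(16, 4))

def multitask_loss(x, y_cls, alpha=1.0):
    """Few-shot classification loss plus a rotation-prediction auxiliary loss.

    The auxiliary task needs no labels: the 'rotation' applied to each input
    is itself the target, so unlabeled data can feed the second head.
    """
    # Pretext task: each input is "rotated" by one of 4 transforms.
    # (Simulated here by rolling the vector; real images would be rotated.)
    rots = rng.integers(0, 4, size=len(x))
    x_rot = np.stack([np.roll(xi, r) for xi, r in zip(x, rots)])

    feats = np.tanh(x @ W_feat)          # features of the original inputs
    feats_rot = np.tanh(x_rot @ W_feat)  # features of the transformed inputs

    loss_cls = cross_entropy(feats @ W_cls, y_cls)     # supervised, few labels
    loss_rot = cross_entropy(feats_rot @ W_rot, rots)  # self-supervised
    return loss_cls + alpha * loss_rot, loss_cls, loss_rot

x = rng.normal(size=(8, 32))
y = rng.integers(0, 5, size=8)
total, lc, lr = multitask_loss(x, y, alpha=0.5)
```

Both heads backpropagate into the shared extractor `W_feat`, which is how the auxiliary signal enriches the representation even when the classification labels are scarce; the weight `alpha` balances the two terms.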

Spyros Gidaris, Andrei Bursuc, Nikos Komodakis, Patrick Pérez, Matthieu Cord • 2019

Related benchmarks

Task | Dataset | Result | Rank
Few-shot classification | tieredImageNet (test) | – | 282
Few-shot Image Classification | Mini-Imagenet (test) | Accuracy: 79.87 | 235
5-way Classification | miniImageNet (test) | Accuracy: 79.9 | 231
Boundary Detection | BSDS 500 (test) | ODS: 76.4 | 185
Few-shot classification | Mini-ImageNet | 1-shot Acc: 64 | 175
5-way Few-shot Classification | MiniImagenet | Accuracy (5-shot): 74.3 | 150
5-way Few-shot Classification | Mini-Imagenet (test) | 1-shot Accuracy: 62.93 | 141
Few-shot classification | miniImageNet standard (test) | 5-way 1-shot Acc: 62.93 | 138
5-way Image Classification | tieredImageNet 5-way (test) | 1-shot Acc: 70.53 | 117
Image Classification | ImageNet 1K Challenge (novel classes) | Top-5 Acc: 77.31 | 110

Showing 10 of 26 rows.
