
Few-Shot Image Classification via Contrastive Self-Supervised Learning

About

Most previous few-shot learning algorithms rely on meta-training with artificially constructed few-shot tasks as training samples, which requires a large set of labeled base classes; the trained model is also restricted to the type of tasks seen during training. In this paper we propose a new paradigm of unsupervised few-shot learning that addresses these deficiencies. We solve few-shot tasks in two phases: meta-training a transferable feature extractor via contrastive self-supervised learning, and training a classifier using graph aggregation, self-distillation, and manifold augmentation. Once meta-trained, the model can be applied to any type of task by training a task-dependent classifier. Our method achieves state-of-the-art performance on a variety of established few-shot tasks on the standard few-shot visual classification datasets, with an 8-28% improvement over existing unsupervised few-shot learning methods.
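The first phase meta-trains the feature extractor with a contrastive self-supervised objective. The abstract does not specify the loss, but a common choice for this kind of training is the NT-Xent (InfoNCE-style) loss over two augmented views of each image; the sketch below is a hypothetical minimal NumPy implementation under that assumption, not the authors' exact formulation.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent contrastive loss over a batch of 2N embeddings.

    Rows i and i+N are assumed to be the two augmented views of
    sample i; each view's positive is its counterpart, and every
    other row in the batch acts as a negative.
    """
    n = z.shape[0] // 2
    # L2-normalize so dot products are cosine similarities
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    # positive index for row i is (i + n) mod 2n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - log_denom)
    return loss.mean()

# Toy check: identical views should score a lower loss than random ones
rng = np.random.default_rng(0)
views = rng.normal(size=(4, 8))
loss_aligned = nt_xent_loss(np.vstack([views, views]))
loss_random = nt_xent_loss(rng.normal(size=(8, 8)))
```

In practice `z` would be the projection-head outputs of a backbone network on augmented image pairs; the loss pulls the two views of each image together while pushing apart all other images in the batch.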

Jianyi Li, Guizhong Liu • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
5-way Classification | miniImageNet (test) | - | - | 231
Image Classification | MiniImagenet | Accuracy | 63.13 | 206
Image Classification | Mini-Imagenet (test) | Acc (5-shot) | 68.91 | 75
5-Shot 5-Way Classification | miniImageNet (test) | Accuracy | 68.91 | 36
5-way 1-shot Image Classification | miniImageNet standard (test) | Accuracy | 54.17 | 12
