
Assume, Augment and Learn: Unsupervised Few-Shot Meta-Learning via Random Labels and Data Augmentation

About

The field of few-shot learning has been laboriously explored in the supervised setting, where per-class labels are available. On the other hand, the unsupervised few-shot learning setting, where no labels of any kind are required, has seen little investigation. We propose a method, named Assume, Augment and Learn (AAL), for generating few-shot tasks using unlabeled data. We randomly label a random subset of images from an unlabeled dataset to generate a support set. Then, by applying data augmentation to the support set's images and reusing the support set's labels, we obtain a target set. The resulting few-shot tasks can be used to train any standard meta-learning framework. Once trained, such a model can be directly applied to small real-labeled datasets without any changes or fine-tuning. In our experiments, the learned models achieve good generalization performance in a variety of established few-shot learning tasks on Omniglot and Mini-Imagenet.
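The task-generation procedure described above (assume random labels, then augment) can be sketched in a few lines. This is a minimal illustration, not the authors' code: the function names, the toy flip-and-noise augmentation, and the array-based "image pool" are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(images, rng):
    """Toy augmentation: random horizontal flip plus small pixel noise.
    (A stand-in for whatever standard image augmentations are used.)"""
    flip_mask = rng.random((len(images), 1, 1)) < 0.5
    flipped = np.where(flip_mask, images[:, :, ::-1], images)
    return flipped + rng.normal(0.0, 0.05, size=images.shape)

def aal_task(unlabeled, n_way=5, k_shot=1, rng=rng):
    """Generate one AAL few-shot task from an unlabeled image pool.

    1. Assume: sample N*K images and assign them arbitrary class labels
       (a random subset with random labels, as in the abstract).
    2. Augment: build the target set by augmenting the support images
       and reusing the support labels unchanged.
    """
    idx = rng.choice(len(unlabeled), size=n_way * k_shot, replace=False)
    support_x = unlabeled[idx]
    # Random sampling + arbitrary grouping = random labeling of the subset.
    support_y = np.repeat(np.arange(n_way), k_shot)
    target_x = augment(support_x, rng)
    target_y = support_y.copy()  # labels are reused, not re-drawn
    return (support_x, support_y), (target_x, target_y)

# Example: a pool of 100 fake 28x28 "images" stands in for an unlabeled dataset.
pool = rng.random((100, 28, 28))
(sx, sy), (tx, ty) = aal_task(pool, n_way=5, k_shot=1)
```

Each call yields one episode (support set plus target set) that can be fed to any standard episodic meta-learner; the model never sees real labels during meta-training.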

Antreas Antoniou, Amos Storkey • 2019

Related benchmarks

Task | Dataset | Result | Rank
5-way Classification | miniImageNet (test) | Accuracy 49.18 | 231
Image Classification | Mini-Imagenet | Accuracy 49.18 | 206
Few-shot classification | Mini-Imagenet (test) | -- | 113
Few-shot Image Classification | miniImageNet (test) | -- | 111
Few-shot classification | Omniglot (test) | Accuracy 97.96 | 109
Few-shot classification | Mini-Imagenet 5-way 5-shot | Accuracy 49.18 | 87
Few-shot classification | Mini-ImageNet 1-shot 5-way (test) | Accuracy 37.67 | 82
Few-shot classification | Omniglot 20-way 5-shot (test) | Accuracy 88.32 | 43
Few-shot classification | Omniglot 20-way 1-shot (test) | Accuracy 70.21 | 43
5-Shot 5-Way Classification | miniImageNet (test) | Accuracy 49.18 | 36

(Showing 10 of 16 rows)
