Assume, Augment and Learn: Unsupervised Few-Shot Meta-Learning via Random Labels and Data Augmentation
About
The field of few-shot learning has been extensively explored in the supervised setting, where per-class labels are available. In contrast, the unsupervised few-shot learning setting, where no labels of any kind are required, has seen little investigation. We propose a method, named Assume, Augment and Learn (AAL), for generating few-shot tasks from unlabeled data. We randomly label a random subset of images from an unlabeled dataset to generate a support set. Then, by applying data augmentation to the support set's images and reusing the support set's labels, we obtain a target set. The resulting few-shot tasks can be used to train any standard meta-learning framework. Once trained, such a model can be applied directly to small labeled datasets without any changes or fine-tuning. In our experiments, the learned models achieve good generalization performance on a variety of established few-shot learning tasks on Omniglot and Mini-ImageNet.
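The task-generation recipe described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: images are NumPy arrays, a horizontal flip stands in for the paper's data augmentation, and the function name `make_aal_task` is hypothetical.

```python
import numpy as np

def make_aal_task(unlabeled_images, n_way=5, k_shot=1, rng=None):
    """Build one few-shot task from unlabeled images (AAL-style sketch).

    1. Assume: sample n_way * k_shot unlabeled images and assign them
       random class labels (k_shot images per assumed class).
    2. Augment: apply a label-preserving augmentation (here: a horizontal
       flip) to each support image to form the target set, reusing the
       support labels unchanged.
    """
    rng = rng or np.random.default_rng()
    # sample a random subset of images for the support set
    idx = rng.choice(len(unlabeled_images), size=n_way * k_shot, replace=False)
    support_x = unlabeled_images[idx]
    # random labels: each class id appears exactly k_shot times, in random order
    support_y = np.repeat(np.arange(n_way), k_shot)
    rng.shuffle(support_y)
    # hypothetical augmentation: flip each image along its width axis
    target_x = support_x[:, :, ::-1]
    target_y = support_y.copy()
    return (support_x, support_y), (target_x, target_y)
```

The resulting (support, target) pair plugs into any episodic meta-learner: the model adapts on the randomly labeled support set and is evaluated on the augmented target set.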
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| 5-way Classification | miniImageNet (test) | Accuracy: 49.18 | 231 |
| Image Classification | MiniImagenet | Accuracy: 49.18 | 206 |
| Few-shot classification | Mini-Imagenet (test) | -- | 113 |
| Few-shot Image Classification | miniImageNet (test) | -- | 111 |
| Few-shot classification | Omniglot (test) | Accuracy: 97.96 | 109 |
| Few-shot classification | Mini-Imagenet 5-way 5-shot | Accuracy: 49.18 | 87 |
| Few-shot classification | Mini-ImageNet 1-shot 5-way (test) | Accuracy: 37.67 | 82 |
| Few-shot classification | Omniglot 20-way 5-shot (test) | Accuracy: 88.32 | 43 |
| Few-shot classification | Omniglot 20-way 1-shot (test) | Accuracy: 70.21 | 43 |
| 5-Shot 5-Way Classification | miniImageNet (test) | Accuracy: 49.18 | 36 |