Self-Supervised Prototypical Transfer Learning for Few-Shot Classification
About
Most approaches in few-shot learning rely on costly annotated data related to the goal task domain during (pre-)training. Recently, unsupervised meta-learning methods have exchanged the annotation requirement for a reduction in few-shot classification performance. Simultaneously, in settings with realistic domain shift, common transfer learning has been shown to outperform supervised meta-learning. Building on these insights and on advances in self-supervised learning, we propose a transfer learning approach which constructs a metric embedding that clusters unlabeled prototypical samples and their augmentations closely together. This pre-trained embedding is a starting point for few-shot classification by summarizing class clusters and fine-tuning. We demonstrate that our self-supervised prototypical transfer learning approach ProtoTransfer outperforms state-of-the-art unsupervised meta-learning methods on few-shot tasks from the mini-ImageNet dataset. In few-shot experiments with domain shift, our approach even has comparable performance to supervised methods, but requires orders of magnitude fewer labels.
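The few-shot classification step described above — summarizing each class cluster by a prototype in the pre-trained embedding and assigning queries to the nearest one — follows the standard prototypical-networks recipe. A minimal NumPy sketch, assuming embeddings come from the pre-trained encoder (function and variable names here are illustrative, not from the paper's code):

```python
import numpy as np

def prototypes(support_embeddings, support_labels, n_way):
    # Class prototype = mean of the support embeddings belonging to that class.
    return np.stack([support_embeddings[support_labels == c].mean(axis=0)
                     for c in range(n_way)])

def classify(query_embeddings, protos):
    # Assign each query to the class whose prototype is nearest (Euclidean distance).
    dists = np.linalg.norm(query_embeddings[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way, 2-shot episode with 3-dimensional embeddings.
support = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0],
                    [1.0, 1.0, 1.0], [1.2, 1.0, 1.0]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, n_way=2)

queries = np.array([[0.1, 0.1, 0.0], [0.9, 1.0, 1.1]])
print(classify(queries, protos))  # → [0 1]
```

In ProtoTransfer these prototypes initialize the few-shot classifier, which can then be fine-tuned on the labeled support set; the sketch above covers only the nearest-prototype assignment.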
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Few-shot Image Classification | Mini-Imagenet (test) | -- | 235 |
| Image Classification | MiniImagenet | Accuracy 63.01 | 206 |
| Few-shot classification | Mini-Imagenet (test) | -- | 113 |
| Few-shot Image Classification | miniImageNet (test) | -- | 111 |
| Few-shot classification | Omniglot (test) | Accuracy 99.08 | 109 |
| Few-shot classification | MiniImagenet | 5-way 5-shot Accuracy 62.99 | 98 |
| 5-way Classification | EuroSAT | -- | 51 |
| Few-shot Image Classification | CropDiseases CDFSL (test) | -- | 45 |
| Cross-domain few-shot classification | CD-FSL benchmark | -- | 33 |
| N-way K-shot classification | Mini-Imagenet (test) | Accuracy (5-way, 1-shot) 45.67 | 26 |