Enhancing Few-Shot Image Classification with Unlabelled Examples
About
We develop a transductive meta-learning method that uses unlabelled instances to improve few-shot image classification performance. Our approach combines a regularized Mahalanobis-distance-based soft k-means clustering procedure with a modified state-of-the-art neural adaptive feature extractor to improve test-time classification accuracy using unlabelled data. We evaluate our method on transductive few-shot learning tasks, in which the goal is to jointly predict labels for the query (test) examples given a set of support (training) examples. We achieve state-of-the-art performance on the Meta-Dataset, mini-ImageNet and tiered-ImageNet benchmarks. All trained models and code are publicly available at github.com/plai-group/simple-cnaps.
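The clustering step described above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the released implementation: the function name `mahalanobis_soft_kmeans`, the identity-shrinkage weight `eps`, and the fixed iteration count are all assumptions, and the sketch operates on raw feature vectors rather than the adapted features produced by the neural feature extractor. The idea it shows is the one stated in the text: class means and regularized class covariances are estimated from the labelled support set, queries are soft-assigned by Mahalanobis distance, and those soft assignments feed back into the mean estimates.

```python
import numpy as np

def mahalanobis_soft_kmeans(support, support_labels, query, n_iters=3, eps=1.0):
    """Hypothetical sketch: transductive soft k-means over query features
    using regularized Mahalanobis distances to class means."""
    n_classes = int(support_labels.max()) + 1
    d = support.shape[1]
    # Initialize class means from the labelled support examples.
    means = np.stack([support[support_labels == k].mean(axis=0)
                      for k in range(n_classes)])
    resp = np.full((len(query), n_classes), 1.0 / n_classes)
    for _ in range(n_iters):
        precisions = []
        for k in range(n_classes):
            # Class covariance, shrunk toward the identity for stability
            # (this particular regularizer is an assumption of the sketch).
            diff = support[support_labels == k] - means[k]
            cov = diff.T @ diff / max(len(diff), 1) + eps * np.eye(d)
            precisions.append(np.linalg.inv(cov))
        # Soft-assign each query by Mahalanobis distance to each class mean.
        dists = np.stack([
            np.einsum('ni,ij,nj->n', query - means[k], precisions[k],
                      query - means[k])
            for k in range(n_classes)], axis=1)
        resp = np.exp(-0.5 * (dists - dists.min(axis=1, keepdims=True)))
        resp /= resp.sum(axis=1, keepdims=True)
        # Refine means using support labels plus soft query assignments.
        for k in range(n_classes):
            w = resp[:, k]
            mask = support_labels == k
            means[k] = (support[mask].sum(axis=0) + w @ query) / (mask.sum() + w.sum())
    return resp
```

The returned matrix holds per-class responsibilities for each query example; taking its row-wise argmax yields the transductive label predictions.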
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Few-shot classification | tieredImageNet (test) | -- | 282 |
| Few-shot image classification | Mini-Imagenet (test) | -- | 235 |
| Image classification | MiniImagenet | Accuracy 73.1 | 206 |
| Few-shot classification | Mini-ImageNet | 1-shot Accuracy 79.9 | 175 |
| Few-shot image classification | miniImageNet (test) | -- | 111 |
| Few-shot classification | MiniImagenet | 5-way 5-shot Accuracy 73.1 | 98 |
| Few-shot image classification | tieredImageNet | -- | 90 |
| Few-shot classification | Mini-Imagenet 5-way 5-shot | Accuracy 91.5 | 87 |
| Few-shot image classification | tieredImageNet (test) | -- | 86 |
| Few-shot classification | Meta-Dataset | Avg Seen Accuracy 75.1 | 45 |