Low-shot learning with large-scale diffusion
About
This paper considers the problem of inferring image labels when only a few annotated examples are available at training time. This setup is often referred to as low-shot learning, where a standard approach is to re-train the last few layers of a convolutional neural network learned on separate classes for which training examples are abundant. We consider a semi-supervised setting based on a large collection of images to support label propagation. This is made possible by leveraging recent advances in large-scale similarity graph construction. We show that, despite its conceptual simplicity, scaling label propagation up to hundreds of millions of images leads to state-of-the-art accuracy in the low-shot learning regime.
Matthijs Douze, Arthur Szlam, Bharath Hariharan, Hervé Jégou • 2017
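The core mechanism in the abstract, diffusing labels from a few annotated seeds over a similarity graph, can be illustrated with a minimal sketch. This is a generic Zhou-style normalized diffusion on a small dense graph, not the paper's actual large-scale implementation (which relies on approximate k-NN graph construction); the function name and parameters are illustrative.

```python
import numpy as np

def label_propagation(W, Y0, num_iters=20, alpha=0.75):
    """Diffuse seed labels over a similarity graph (illustrative sketch).

    W  : (n, n) nonnegative symmetric similarity matrix
    Y0 : (n, c) one-hot rows for labeled points, zero rows for unlabeled
    alpha balances graph diffusion against re-injecting the seed labels.
    """
    # Symmetric normalization: S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    d[d == 0] = 1.0  # guard isolated nodes against division by zero
    d_inv_sqrt = 1.0 / np.sqrt(d)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    Y = Y0.astype(float).copy()
    for _ in range(num_iters):
        # One diffusion step, then pull back toward the known seed labels
        Y = alpha * (S @ Y) + (1.0 - alpha) * Y0
    return Y.argmax(axis=1)

# Toy graph: two disjoint chains 0-1-2 and 3-4-5, one seed label each
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (3, 4), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
Y0 = np.zeros((6, 2))
Y0[0, 0] = 1.0  # node 0 labeled class 0
Y0[3, 1] = 1.0  # node 3 labeled class 1
pred = label_propagation(W, Y0)
```

Each unlabeled node ends up with the label of the seed in its connected component, which is the behavior the paper scales to a graph over a hundred million images.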
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Generalized Few-Shot Learning | ImageNet 2012 (novel classes) | Top-5 Accuracy | 57.6 | 70 |
| Few-shot Image Classification | ImageNet FS (novel) | Top-5 Accuracy | 0.776 | 59 |
| Low-shot Image Classification | ImageNet-1k (val) | Top-5 Accuracy | 86.3 | 40 |
| Image Classification | ImageNet FS | Top-5 Accuracy (novel, 1-shot) | 57.7 | 13 |
| Image Classification | ImageNet in-domain | -- | -- | 5 |