# Few-shot Learning with Noisy Labels

## About
Few-shot learning (FSL) methods typically assume clean support sets with accurately labeled samples when training on novel classes. This assumption is often unrealistic: support sets, no matter how small, can still include mislabeled samples. Robustness to label noise is therefore essential for FSL methods to be practical, yet this problem surprisingly remains largely unexplored. To address mislabeled samples in FSL settings, we make several technical contributions. (1) We offer simple, yet effective, feature aggregation methods, improving the prototypes used by ProtoNet, a popular FSL technique. (2) We describe a novel Transformer model for Noisy Few-Shot Learning (TraNFS). TraNFS leverages a transformer's attention mechanism to weigh mislabeled versus correctly labeled samples. (3) Finally, we extensively test these methods on noisy versions of MiniImageNet and TieredImageNet. Our results show that TraNFS is on par with leading FSL methods on clean support sets, yet outperforms them, by far, in the presence of label noise.
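To make the ProtoNet-style setup concrete, the sketch below shows a minimal prototype classifier with a swappable aggregation step. The `mean` branch is standard ProtoNet; the `median` branch illustrates the *kind* of simple, noise-robust feature aggregation the abstract refers to (the function names and the NumPy-based setup are illustrative assumptions, not the paper's actual code).

```python
import numpy as np

def prototype(support_features: np.ndarray, aggregation: str = "mean") -> np.ndarray:
    """Aggregate one class's support embeddings (n_shot, dim) into a prototype.

    aggregation="mean"   -> standard ProtoNet prototype.
    aggregation="median" -> coordinate-wise median, a simple robust
                            alternative that down-weights outlier
                            (potentially mislabeled) support samples.
    """
    if aggregation == "mean":
        return support_features.mean(axis=0)
    if aggregation == "median":
        return np.median(support_features, axis=0)
    raise ValueError(f"unknown aggregation: {aggregation}")

def classify(query: np.ndarray, prototypes: np.ndarray) -> int:
    # Nearest-prototype rule under squared Euclidean distance, as in ProtoNet.
    dists = ((prototypes - query) ** 2).sum(axis=1)
    return int(dists.argmin())
```

With a 5-shot support set where one sample is mislabeled (e.g., four embeddings near 1.0 and one outlier near 10.0), the mean prototype is pulled toward the outlier while the median prototype stays near the clean cluster, which is exactly the failure mode robust aggregation targets.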
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Few-shot classification | tieredImageNet (test) | Accuracy: 71.48 | 282 |
| Few-shot classification | Mini-Imagenet (test) | Accuracy: 68.53 | 113 |
| Few-shot classification | MiniImageNet 5-way 10-shot (test) | Accuracy: 70.38 | 69 |
| Few-shot image classification | MiniImageNet 5-way 10-shot (test) | Accuracy (0% noise): 73.69 | 46 |
| Few-shot image classification | MiniImageNet 5-way 3-shot (test) | Accuracy: 63.63 | 46 |
| 5-shot 5-way classification | miniImageNet (test) | Accuracy: 53.96 | 36 |
| 5-way 5-shot classification | tiered-ImageNet (test) | Accuracy: 55.12 | 32 |
| 5-way 3-shot image classification | MiniImageNet 0% noise (test) | Accuracy: 64.28 | 23 |
| 5-way 3-shot image classification | MiniImageNet 33.3% symmetric label-swap noise (test) | Accuracy: 53.84 | 23 |
| 5-way 5-shot classification | MiniImagenet | Accuracy (0% noise): 68.51 | 16 |