
Adversarial Feature Hallucination Networks for Few-Shot Learning

About

The recent flourishing of deep learning across various tasks is largely attributable to rich and accessible labeled data. Nonetheless, massive supervision remains a luxury for many real applications, spurring great interest in label-scarce techniques such as few-shot learning (FSL), which aims to learn concepts of new classes from a few labeled samples. A natural approach to FSL is data augmentation, and many recent works have demonstrated its feasibility by proposing various data synthesis models. However, these models fail to ensure the discriminability and diversity of the synthesized data and thus often produce undesirable results. In this paper, we propose Adversarial Feature Hallucination Networks (AFHN), based on conditional Wasserstein Generative Adversarial Networks (cWGAN), which hallucinate diverse and discriminative features conditioned on the few labeled samples. Two novel regularizers, a classification regularizer and an anti-collapse regularizer, are incorporated into AFHN to encourage the discriminability and diversity of the synthesized features, respectively. An ablation study verifies the effectiveness of the proposed cWGAN-based feature hallucination framework and the proposed regularizers. Comparative results on three common benchmark datasets substantiate the superiority of AFHN over existing data-augmentation-based FSL approaches and other state-of-the-art methods.
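To make the anti-collapse idea concrete, here is a minimal NumPy sketch of a diversity ratio in the spirit the abstract describes: the dissimilarity between two features hallucinated from the same class prototype, divided by the dissimilarity between the noise vectors that produced them. The toy linear generator and all names (`anti_collapse_ratio`, `cos_sim`, the 64-d sizes) are illustrative assumptions, not the paper's actual architecture or exact loss.

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def anti_collapse_ratio(f1, f2, z1, z2):
    # Diversity ratio: dissimilarity of the synthesized features over
    # dissimilarity of the noise vectors that produced them. Maximizing
    # this term penalizes mode collapse (distinct z mapping to nearly
    # identical features). Hypothetical form, for illustration only.
    return (1.0 - cos_sim(f1, f2)) / (1.0 - cos_sim(z1, z2))

# Toy "generator": concatenate a class prototype with noise, project linearly.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))           # assumed projection weights
prototype = rng.standard_normal(64)          # conditioning: class prototype
z1, z2 = rng.standard_normal(64), rng.standard_normal(64)
f1 = W @ np.concatenate([prototype, z1])     # two features, same class,
f2 = W @ np.concatenate([prototype, z2])     # different noise draws

print(anti_collapse_ratio(f1, f2, z1, z2))   # > 0 when features differ
```

A collapsed generator that ignores the noise would yield identical features and a ratio near zero, which is exactly the behavior such a regularizer discourages.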

Kai Li, Yulun Zhang, Kunpeng Li, Yun Fu • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| 5-way Classification | miniImageNet (test) | -- | 231 |
| Few-shot classification | Mini-ImageNet | 1-shot Acc: 62.38 | 175 |
| 5-way Few-shot Classification | Mini-Imagenet (test) | 1-shot Accuracy: 62.38 | 141 |
| Few-shot classification | MiniImagenet | 5-way 5-shot Accuracy: 78.16 | 98 |
| Few-shot classification | CUB-200 2011 | Accuracy: 83.95 | 33 |
