
Few-Shot Adversarial Domain Adaptation

About

This work provides a framework for addressing the problem of supervised domain adaptation with deep models. The main idea is to exploit adversarial learning to learn an embedded subspace that simultaneously maximizes the confusion between two domains while semantically aligning their embeddings. The supervised setting becomes attractive especially when only a few target data samples need to be labeled. In this few-shot learning scenario, aligning and separating semantic probability distributions is difficult because of the lack of data. We found that by carefully designing a training scheme in which the typical binary adversarial discriminator is augmented to distinguish between four different classes, it is possible to effectively address the supervised adaptation problem. In addition, the approach has a high speed of adaptation, i.e., it requires an extremely low number of labeled target training samples; even one per category can be effective. We then extensively compare this approach to the state of the art in domain adaptation in two experiments: one using datasets for handwritten digit recognition, and one using datasets for visual object recognition.
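The key idea above is to replace the usual binary (source vs. target) discriminator with a four-way classifier over *pairs* of samples, grouped by whether the two samples share a domain and whether they share a class label. A minimal sketch of one plausible pairing scheme (the group numbering and the `pair_group` helper are illustrative assumptions, not the authors' exact code):

```python
# Sketch of the 4-class pairing used by the augmented adversarial
# discriminator. Each training pair is assigned one of four groups based on
# (same/different domain, same/different class). Group indices here are an
# assumption for illustration.

def pair_group(domain_a, label_a, domain_b, label_b):
    """Return the adversarial group index for a pair of embedded samples.

    Groups (one plausible reading of the paper's four classes):
      0: same domain, same class
      1: different domains, same class
      2: same domain, different classes
      3: different domains, different classes
    """
    same_domain = domain_a == domain_b
    same_class = label_a == label_b
    if same_domain and same_class:
        return 0
    if not same_domain and same_class:
        return 1
    if same_domain and not same_class:
        return 2
    return 3
```

During training, the discriminator would be fit to predict these four groups from concatenated embedding pairs, while the encoder is updated to confuse the within-domain groups with their cross-domain counterparts (0 with 1, and 2 with 3), which is what simultaneously encourages domain confusion and semantic alignment.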

Saeid Motiian, Quinn Jones, Seyed Mehdi Iranmanesh, Gianfranco Doretto • 2017

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Domain Adaptation | SVHN to MNIST (test) | Accuracy | 87.2 | 53 |
| Unsupervised Domain Adaptation | USPS to MNIST (test) | Accuracy | 91.5 | 30 |
| Charge Prediction | NCCP (non-PLLS) | Accuracy | 62.3 | 8 |
| Domain Adaptation | MNIST to SVHN (test) | Accuracy | 47 | 8 |
