
Generalizing to Unseen Domains via Adversarial Data Augmentation

About

We are concerned with learning models that generalize well to different \emph{unseen} domains. We consider a worst-case formulation over data distributions that are near the source domain in the feature space. Using training data from only a single source distribution, we propose an iterative procedure that augments the dataset with examples from a fictitious target domain that is "hard" under the current model. We show that our iterative scheme is an adaptive data augmentation method in which we append adversarial examples at each iteration. For softmax losses, we show that our method is a data-dependent regularization scheme that behaves differently from classical regularizers that regularize towards zero (e.g., ridge or lasso). On digit recognition and semantic segmentation tasks, our method learns models that improve performance across a range of a priori unknown target domains.
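The iterative scheme described above alternates between training on the current dataset and appending adversarially perturbed copies of the source examples. A minimal sketch, assuming a fixed linear softmax classifier and plain gradient ascent in input space; the function names, step sizes, and the quadratic penalty weight `gamma` (standing in for the paper's distance constraint) are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def softmax_loss(W, x, y):
    """Cross-entropy loss of a linear softmax model at a single point."""
    logits = x @ W
    logits = logits - logits.max()              # numerical stability
    return np.log(np.exp(logits).sum()) - logits[y]

def loss_grad_x(W, x, y):
    """Gradient of the loss with respect to the *input* x."""
    logits = x @ W
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.eye(W.shape[1])[y]
    return W @ (p - onehot)

def adversarial_augment(W, X, y, gamma=0.1, step=0.05, n_steps=3):
    """Create 'hard' fictitious-domain points by gradient *ascent* on the
    loss, with an L2 penalty keeping each point near its source example
    (a simplified stand-in for the worst-case distance constraint)."""
    X_adv = X.copy()
    for _ in range(n_steps):
        for i in range(len(X_adv)):
            g = loss_grad_x(W, X_adv[i], y[i])
            X_adv[i] += step * (g - gamma * (X_adv[i] - X[i]))
    return X_adv

# Toy data: 20 points in 5 dimensions, 3 classes, random fixed model.
rng = np.random.default_rng(0)
d, k, n = 5, 3, 20
W = rng.normal(size=(d, k))
X = rng.normal(size=(n, d))
y = rng.integers(0, k, size=n)

X_adv = adversarial_augment(W, X, y)
# Appending the perturbed copies yields the augmented training set,
# on which the model would be retrained before the next iteration.
X_aug = np.vstack([X, X_adv])
y_aug = np.concatenate([y, y])
```

In the full procedure the model would be retrained on `X_aug` after each augmentation round, so later fictitious examples are "hard" for the updated model rather than the initial one.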

Riccardo Volpi, Hongseok Namkoong, Ozan Sener, John Duchi, Vittorio Murino, Silvio Savarese • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 (test) | Accuracy | 76.5 | 3518 |
| Image Classification | CIFAR-10 (test) | Accuracy | 95.2 | 3381 |
| Image Classification | TinyImageNet (test) | Accuracy | 66.5 | 366 |
| Image Classification | SVHN (test) | Accuracy | 97.5 | 362 |
| Question Answering | SQuAD v1.1 (test) | F1 Score | 84.19 | 260 |
| Image Classification | PACS (test) | Average Accuracy | 81.44 | 254 |
| Image Classification | PACS | Overall Average Accuracy | 81.44 | 230 |
| Domain Generalization | PACS (test) | Average Accuracy | 81.44 | 225 |
| Domain Generalization | PACS (leave-one-domain-out) | Art Accuracy | 78.32 | 146 |
| Image Classification | CIFAR-10-C | Accuracy | 59.91 | 127 |

Showing 10 of 41 rows.
