
Image to Image Translation for Domain Adaptation

About

We propose a general framework for unsupervised domain adaptation, which allows deep neural networks trained on a source domain to be tested on a different target domain without requiring any training annotations in the target domain. This is achieved by adding extra networks and losses that help regularize the features extracted by the backbone encoder network. To this end we propose the novel use of the recently introduced unpaired image-to-image translation framework to constrain the features extracted by the encoder network. Specifically, we require that the features extracted are able to reconstruct the images in both domains. In addition, we require that the distributions of features extracted from images in the two domains be indistinguishable. Many recent works can be seen as specific cases of our general framework. We apply our method for domain adaptation between the MNIST, USPS, and SVHN datasets, and the Amazon, Webcam, and DSLR Office datasets in classification tasks, and also between the GTA5 and Cityscapes datasets for a segmentation task. We demonstrate state-of-the-art performance on each of these datasets.
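The two constraints described above (features must reconstruct the images of both domains, and the feature distributions of the two domains must be indistinguishable) can be sketched numerically. The sketch below is a toy illustration, not the paper's implementation: the encoder/decoder are plain linear maps rather than CNNs, and the adversarial discriminator is replaced by a simple mean-feature-distance penalty as a stand-in for distribution matching. All names and dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images" from two domains, flattened to 64-dim vectors
# (stand-ins for, e.g., source MNIST and target USPS digits).
x_src = rng.normal(0.0, 1.0, (32, 64))
x_tgt = rng.normal(0.5, 1.0, (32, 64))

# Shared linear encoder/decoder (stand-ins for the CNNs in the paper).
W_enc = rng.normal(0, 0.1, (64, 16))
W_dec = rng.normal(0, 0.1, (16, 64))

def encode(x):
    return x @ W_enc

def decode(z):
    return z @ W_dec

z_src, z_tgt = encode(x_src), encode(x_tgt)

# Constraint 1: extracted features must reconstruct the images
# in both domains (mean-squared reconstruction error).
rec_src = np.mean((decode(z_src) - x_src) ** 2)
rec_tgt = np.mean((decode(z_tgt) - x_tgt) ** 2)

# Constraint 2: feature distributions of the two domains should be
# indistinguishable. Here a mean-feature distance is used as a crude
# proxy; the paper uses an adversarial discriminator instead.
align = np.mean((z_src.mean(axis=0) - z_tgt.mean(axis=0)) ** 2)

# Total training objective: sum of the regularizing losses.
total = rec_src + rec_tgt + align
print(float(total))
```

In training, all three terms would be minimized jointly with the task loss (classification or segmentation) on the labeled source domain, which is what lets the encoder transfer to the unlabeled target domain.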

Zak Murez, Soheil Kolouri, David Kriegman, Ravi Ramamoorthi, Kyungnam Kim • 2017

Related benchmarks

Task | Dataset | Result | Rank
Semantic Segmentation | Cityscapes, GTA5 to Cityscapes adaptation (val) | mIoU (Overall): 35.7 | 352
Image Classification | Office-31 | Average Accuracy: 74.1 | 261
Semantic Segmentation | GTA5 to Cityscapes (test) | mIoU: 35.4 | 151
Digit Classification | MNIST → USPS (test) | Accuracy: 92.1 | 65
Digit Classification | USPS → MNIST (test) | Accuracy: 87.2 | 58
Digit Classification | SVHN → MNIST (test) | Accuracy: 80.3 | 37
Domain Adaptation Classification | MNIST → USPS | Accuracy: 95.1 | 26
Domain Adaptation Classification | SVHN → MNIST | Accuracy: 92.1 | 25
Domain Adaptation Classification | USPS → MNIST | Accuracy: 92.2 | 23
