# Unsupervised Cross-Domain Image Generation

## About
We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domain, remains unchanged. Apart from the function f, the training data is unsupervised and consists of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Domain Adaptation | SVHN to MNIST (test) | Accuracy | 79.72 | 53 |
| Unsupervised Domain Adaptation | SVHN → MNIST (test) | Accuracy | 84.4 | 41 |
| Domain Adaptation Classification | SVHN to MNIST | Accuracy | 84.7 | 25 |
| Face Identity Retrieval | FaceScrub 100,001 face images (test) | Median Rank | 16 | 2 |
| Domain Adaptation | SVHN to MNIST (train) | Accuracy | 84.44 | 1 |