
A Style-aware Discriminator for Controllable Image Translation

About

Current image-to-image translation methods do not control the output domain beyond the classes seen during training, nor do they interpolate well between different domains, often producing implausible results. This limitation arises largely because their labels do not account for semantic distance. To mitigate these problems, we propose a style-aware discriminator that acts both as a critic and as a style encoder that provides conditions to the generator. The style-aware discriminator learns a controllable style space using prototype-based self-supervised learning and simultaneously guides the generator. Experiments on multiple datasets verify that the proposed model outperforms current state-of-the-art image-to-image translation methods. Unlike current methods, the proposed approach supports various applications, including style interpolation, content transplantation, and local image translation.
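The core ideas above — a discriminator branch that encodes images into style codes, prototype-based assignment of those codes, and interpolation between styles — can be sketched in a few lines. Everything below is a toy stand-in (the encoder weights, prototype count, and shapes are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_style(image, W):
    """Toy stand-in for the style-encoder branch of the discriminator:
    projects an image to a unit-norm style code."""
    s = W @ image.ravel()
    return s / np.linalg.norm(s)

def interpolate_styles(s_a, s_b, t):
    """Linearly interpolate two style codes, then re-normalize so the
    result stays on the unit hypersphere the encoder produces."""
    s = (1.0 - t) * s_a + t * s_b
    return s / np.linalg.norm(s)

def nearest_prototype(s, prototypes):
    """Prototype assignment: index of the prototype with the highest
    cosine similarity to the style code (all vectors unit-norm)."""
    return int(np.argmax(prototypes @ s))

# Illustrative data: 8x8 "images", 8-dim style space, 4 prototypes.
W = rng.normal(size=(8, 64))               # hypothetical encoder weights
img_a = rng.normal(size=(8, 8))
img_b = rng.normal(size=(8, 8))
prototypes = rng.normal(size=(4, 8))
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

s_a = encode_style(img_a, W)
s_b = encode_style(img_b, W)
s_mid = interpolate_styles(s_a, s_b, 0.5)   # a blended style condition
print(nearest_prototype(s_mid, prototypes))
```

In the full model, a generator conditioned on `s_mid` would render the content image in the interpolated style; here the sketch only shows how the style space itself is navigated.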

Kunhee Kim, Sanghun Park, Eunyeong Jeon, Taehun Kim, Daijin Kim • 2022

Related benchmarks

Task                        | Dataset                          | Result     | Rank
Image-to-Image Translation  | CelebA-HQ                        | FID 41.33  | 28
Image-to-Image Translation  | CelebA-HQ gender as class (test) | TC 10.462  | 7
Image-to-Image Translation  | AFHQ 3 classes (test)            | TC 3.241   | 7
Image-to-Image Translation  | Food-10                          | mFID 49.34 | 5
Image-to-Image Translation  | AnimalFaces-10                   | mFID 36.83 | 5
