
Diffusion-based Image Translation using Disentangled Style and Content Representation

About

Diffusion-based image translation guided by semantic text or a single target image has enabled flexible style transfer that is not limited to specific domains. Unfortunately, due to the stochastic nature of diffusion models, it is often difficult to maintain the original content of the image during reverse diffusion. To address this, we present a novel diffusion-based unsupervised image translation method that uses disentangled style and content representations. Specifically, inspired by the splicing Vision Transformer, we extract the intermediate keys of the multi-head self-attention layers of a ViT model and use them as a content preservation loss. Image-guided style transfer is then performed by matching the [CLS] classification token between the denoised samples and the target image, while an additional CLIP loss is used for text-driven style transfer. To further accelerate semantic change during reverse diffusion, we also propose a novel semantic divergence loss and a resampling strategy. Our experimental results show that the proposed method outperforms state-of-the-art baselines in both text-guided and image-guided translation tasks.
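The guidance described above can be read as a weighted sum of three terms: a content loss on intermediate ViT self-attention keys, a style loss matching the [CLS] token of the denoised sample to that of the target image, and a semantic divergence term that pushes the sample's [CLS] token away from the source's. The sketch below is illustrative only — the helper names, distance choices, and weights are assumptions, not the authors' implementation, and feature extraction from a real ViT is stubbed out as plain arrays.

```python
import numpy as np

def content_loss(keys_src, keys_gen):
    # Content preservation: distance between intermediate ViT
    # self-attention keys of the source and the denoised sample.
    return float(np.mean((keys_src - keys_gen) ** 2))

def style_loss(cls_gen, cls_tgt):
    # Image-guided style: match the [CLS] token of the denoised
    # sample to that of the target style image.
    return float(np.mean((cls_gen - cls_tgt) ** 2))

def total_loss(keys_src, keys_gen, cls_gen, cls_tgt, cls_src,
               lambda_c=1.0, lambda_s=1.0, lambda_div=0.5):
    # Semantic divergence (sign flipped so minimizing the total
    # pushes the sample's [CLS] token away from the source's).
    div = -float(np.mean((cls_gen - cls_src) ** 2))
    return (lambda_c * content_loss(keys_src, keys_gen)
            + lambda_s * style_loss(cls_gen, cls_tgt)
            + lambda_div * div)
```

For text-driven transfer, the paper replaces the [CLS]-matching style term with a CLIP loss between the denoised sample and the target text; the overall structure of the objective stays the same.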

Gihyun Kwon, Jong Chul Ye • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Style Transfer | MS-COCO (content) + WikiArt (style) (test) | LPIPS 0.5786 | 31 |
| Artistic transfer | WikiArt | FID (Style) 23.065 | 11 |
| Photo-realistic transfer | MSCOCO | FID (Style) 35.314 | 11 |
| Face Verification | SpeakingFace (test) | Rank-1 Acc 62.6 | 6 |
| Identity Verification | ARL-VTF | Rank-1 Acc 54.67 | 6 |
| Thermal-to-Visible Face Translation | SpeakingFaces | SSIM 0.6912 | 6 |
| Thermal-to-Visible Image Translation | ARL-VTF (test) | SSIM 0.6467 | 6 |
| Face Stylization | AAHQ (low-density) | ArtFID 44.93 | 5 |
| Face Stylization | MetFaces (low-density) | ArtFID 53.35 | 5 |
| Face Stylization | Prev Style Images (low-density) | ArtFID 48.18 | 5 |

(10 of 11 rows shown)
