
Pretraining is All You Need for Image-to-Image Translation

About

We propose to use pretraining to boost general image-to-image translation. Prior image-to-image translation methods usually need dedicated architectural design and train individual translation models from scratch, and they struggle to generate high-quality complex scenes, especially when paired training data are scarce. In this paper, we regard each image-to-image translation problem as a downstream task and introduce a simple and generic framework that adapts a pretrained diffusion model to accommodate various kinds of image-to-image translation. We also propose adversarial training to enhance texture synthesis during diffusion model training, in conjunction with normalized guidance sampling to improve generation quality. We present extensive empirical comparisons across various tasks on challenging benchmarks such as ADE20K, COCO-Stuff, and DIODE, showing that the proposed pretraining-based image-to-image translation (PITI) is capable of synthesizing images of unprecedented realism and faithfulness.
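The abstract mentions normalized guidance sampling but does not spell out the formula. As a hedged illustration only, the sketch below shows generic classifier-free guidance for a diffusion noise prediction, plus a plausible normalization variant that rescales the guided prediction back to the norm of the conditional prediction so large guidance scales do not inflate the noise magnitude. The function names and the rescaling rule are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def guided_noise(eps_cond, eps_uncond, w):
    """Standard classifier-free guidance: extrapolate from the unconditional
    noise prediction toward the conditional one with guidance scale w."""
    return eps_uncond + w * (eps_cond - eps_uncond)

def normalized_guided_noise(eps_cond, eps_uncond, w):
    """Guidance with the output rescaled to the conditional prediction's norm.
    This is an assumed normalization heuristic, shown only to illustrate why
    one might renormalize: plain guidance with large w grows the output norm."""
    eps = guided_noise(eps_cond, eps_uncond, w)
    scale = np.linalg.norm(eps_cond) / (np.linalg.norm(eps) + 1e-8)
    return eps * scale
```

At each sampling step the denoiser would be evaluated twice (with and without the conditioning input) and the combined prediction above would replace the raw conditional prediction.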

Tengfei Wang, Ting Zhang, Bo Zhang, Hao Ouyang, Dong Chen, Qifeng Chen, Fang Wen · 2022

Related benchmarks

Task | Dataset | Result | Rank
Semantic Image Synthesis | ADE20K | FID 8.9 | 66
Semantic Image Synthesis | ADE20K (val) | FID 27.9 | 47
Semantic Image Synthesis | COCO-Stuff (val) | FID 15.5 | 42
Semantic Image Synthesis | COCO-Stuff | FID 2.52 | 40
Layout-to-Image Synthesis | COCO-Stuff (test) | -- | 25
Semantic Image Synthesis | ADE20K (test) | FID 19.74 | 20
Semantic Image Synthesis | COCO-Stuff to ADE20K (target: 100 images) | FID 56.8 | 10
Semantic Image Synthesis | ADE20K to COCO-Stuff (target: 100 images) | FID 83.7 | 10
Semantic Image Synthesis | COCO-Stuff to Cityscapes (target: 100 images) | FID 70.8 | 10
Semantic Image Synthesis | ADE20K to Cityscapes (target: 100 images) | FID 86.1 | 10
(10 of 22 rows shown)
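All results above are reported as FID (Fréchet Inception Distance), where lower is better. FID fits a Gaussian to Inception-network features of real and generated images and measures the Fréchet distance between the two Gaussians. As a minimal sketch of that core distance (the feature-extraction step with an Inception model is omitted):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between two Gaussians N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 * sqrtm(cov1 @ cov2))."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(cov1 @ cov2, disp=False)
    if np.iscomplexobj(covmean):  # sqrtm can return tiny imaginary parts
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))
```

In a full FID computation, mu and cov would be the mean and covariance of Inception features over the real and generated image sets; benchmark numbers also depend on the exact feature extractor and image counts used.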
