
ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback

About

To enhance the controllability of text-to-image diffusion models, existing efforts like ControlNet incorporated image-based conditional controls. In this paper, we reveal that existing methods still face significant challenges in generating images that align with the image conditional controls. To this end, we propose ControlNet++, a novel approach that improves controllable generation by explicitly optimizing pixel-level cycle consistency between generated images and conditional controls. Specifically, for an input conditional control, we use a pre-trained discriminative reward model to extract the corresponding condition from the generated images, and then optimize the consistency loss between the input conditional control and the extracted condition. A straightforward implementation would be to generate images from random noise and then calculate the consistency loss, but such an approach requires storing gradients across multiple sampling timesteps, leading to considerable time and memory costs. To address this, we introduce an efficient reward strategy that deliberately disturbs the input images by adding noise, and then uses the single-step denoised images for reward fine-tuning. This avoids the extensive costs associated with image sampling, allowing for more efficient reward fine-tuning. Extensive experiments show that ControlNet++ significantly improves controllability under various conditional controls. For example, it achieves improvements over ControlNet of 11.1% mIoU, 13.4% SSIM, and 7.6% RMSE for segmentation mask, line-art edge, and depth conditions, respectively. All the code, models, demos, and organized data have been open-sourced in our GitHub repo.
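The efficient reward strategy described above can be sketched in a few lines: instead of sampling an image from pure noise (which would require backpropagating through many denoising steps), the clean input image is perturbed with noise at a single timestep, denoised in one step, and the reward model's extracted condition is compared against the input condition. The sketch below is a minimal, hedged illustration of that idea, not the paper's implementation; the `ToyDenoiser`, `ToyRewardModel`, and the DDPM-style noise schedule are stand-in assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins (assumptions, not the paper's networks): a tiny "diffusion model"
# that predicts the added noise, and a tiny "reward model" that extracts a
# dense condition (e.g. a soft segmentation map) from an image.
class ToyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x_t, t, cond):
        return self.net(x_t)  # predicted noise eps_theta(x_t, t, cond)

class ToyRewardModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 1, 3, padding=1)

    def forward(self, img):
        return torch.sigmoid(self.net(img))  # extracted condition in [0, 1]

def efficient_reward_loss(denoiser, reward_model, x0, cond, t, alphas_cumprod):
    """Single-step reward fine-tuning: disturb the clean image x0 with noise
    at timestep t, recover x0 in ONE denoising step, and penalize
    inconsistency between the extracted and input conditions."""
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    # Forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    eps_pred = denoiser(x_t, t, cond)
    # Single-step estimate of the clean image from the predicted noise
    x0_pred = (x_t - (1 - a_bar).sqrt() * eps_pred) / a_bar.sqrt()
    cond_pred = reward_model(x0_pred)
    # Pixel-level cycle-consistency loss between input and extracted condition
    return F.binary_cross_entropy(cond_pred, cond)
```

Because `x0_pred` is computed in a single step, gradients flow through only one denoiser call, which is what keeps the memory cost low compared to differentiating through a full sampling trajectory.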

Ming Li, Taojiannan Yang, Huafeng Kuang, Jie Wu, Zhaoning Wang, Xuefeng Xiao, Chen Chen • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Segmentation | ADE20K | mIoU | 43.64 | 52 |
| Depth-conditioned fine-grained image generation | CUB-200 (test) | FID | 26.7 | 14 |
| Sketch-conditioned fine-grained image generation | CUB-200 (test) | FID | 27.47 | 14 |
| Scribble-to-image generation | COCO (val) | FID | 24.86 | 10 |
| Pixel-level Spatial Control (Depth) | MultiGen-20M | RMSE | 28.32 | 8 |
| Pixel-level Spatial Control (Canny) | MultiGen-20M | F1 Score | 37.04 | 8 |
| Segmentation-conditioned Image Generation | COCO | CLIP Score | 27.19 | 7 |
| Segmentation-to-Image Generation | COCO (val) | FID | 25.63 | 7 |
| Depth-conditioned Image Generation | COCO | CLIP Score | 27.26 | 7 |
| Sketch-conditioned image generation | COCO | CLIP Score | 27.24 | 7 |

Showing 10 of 17 rows.
