
Adding Conditional Control to Text-to-Image Diffusion Models

About

We present ControlNet, a neural network architecture for adding spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion model and reuses its deep and robust encoding layers, pretrained on billions of images, as a strong backbone for learning a diverse set of conditional controls. The architecture is connected through "zero convolutions" (zero-initialized convolution layers) that grow the parameters progressively from zero, ensuring that no harmful noise affects the finetuning. We test various conditioning controls (e.g., edges, depth, segmentation, human pose) with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that training ControlNets is robust on both small (<50k) and large (>1M) datasets. Extensive results show that ControlNet may facilitate wider applications for controlling image diffusion models.
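The key property of a zero convolution is that, because its weights and bias start at exactly zero, the control branch contributes nothing at the start of finetuning, so the frozen backbone's output is initially unchanged. A minimal sketch in plain Python (a simplified 1x1 "zero convolution" at a single spatial location, not the official ControlNet implementation) illustrates this:

```python
class ZeroConv1x1:
    """A 1x1 convolution whose weights and bias are initialized to zero."""

    def __init__(self, channels):
        # All parameters start at exactly zero; gradients move them
        # away from zero only gradually during finetuning.
        self.weight = [[0.0] * channels for _ in range(channels)]
        self.bias = [0.0] * channels

    def __call__(self, features):
        # features: per-channel activations at one spatial location.
        return [
            sum(w * f for w, f in zip(row, features)) + b
            for row, b in zip(self.weight, self.bias)
        ]


# Hypothetical feature vectors for illustration only.
backbone_features = [0.3, -1.2, 0.8]   # from the locked pretrained model
control_features = [0.5, 0.1, -0.4]    # from the trainable control branch

zero_conv = ZeroConv1x1(channels=3)
injected = zero_conv(control_features)  # all zeros at initialization

# The sum equals the backbone output exactly, so training starts from the
# unmodified pretrained model and the control signal is introduced gradually.
combined = [b + z for b, z in zip(backbone_features, injected)]
print(combined)  # → [0.3, -1.2, 0.8]
```

At initialization the injected term is identically zero, which is why the paper argues the finetuning cannot inject harmful noise into the pretrained encoding layers.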

Lvmin Zhang, Anyi Rao, Maneesh Agrawala • 2023

Related benchmarks

Task                       Dataset               Metric            Result   Rank
Semantic Segmentation      Cityscapes            mIoU              52.12    578
Text-to-Image Generation   GenEval               GenEval Score     46       277
Polyp Segmentation         CVC-ClinicDB (test)   DSC               93.7     196
Polyp Segmentation         Kvasir                Dice Score        91.1     128
Polyp Segmentation         ETIS                  Dice Score        78.7     108
Polyp Segmentation         ETIS (test)           Mean Dice         80.9     86
Object Detection           MS-COCO               AP                36.9     77
Skin Lesion Segmentation   ISIC 2018 (test)      Dice Score        91.52    74
Polyp Segmentation         ColonDB               mDice             79.7     74
Polyp Segmentation         Kvasir (test)         Dice Coefficient  92       73

Showing 10 of 220 rows.
