
Decouple-Then-Merge: Finetune Diffusion Models as Multi-Task Learning

About

Diffusion models are trained by learning a sequence of models that reverse each step of noise corruption. Typically, the model parameters are fully shared across timesteps to improve training efficiency. However, since the denoising tasks differ at each timestep, gradients computed at different timesteps may conflict, potentially degrading the overall quality of image generation. To address this issue, this work proposes a Decouple-then-Merge (DeMe) framework, which begins with a pretrained model and finetunes separate models tailored to specific timesteps. We introduce several improved techniques during the finetuning stage to promote effective knowledge sharing while minimizing training interference across timesteps. Finally, after finetuning, these separate models are merged into a single model in the parameter space, ensuring efficient and practical inference. Experimental results show significant improvements in generation quality across six benchmarks: Stable Diffusion on COCO30K, ImageNet1K, and PartiPrompts, and DDPM on LSUN Church, LSUN Bedroom, and CIFAR10. Code is available at https://github.com/MqLeet/DeMe.
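The final merging step described above combines the timestep-specialized models in parameter space. A minimal sketch of such a merge is a (weighted) average of the models' parameters; the function and variable names below are illustrative and are not taken from the paper's released code.

```python
# Hedged sketch: parameter-space merging of several finetuned models.
# Each "state dict" here is a plain mapping from parameter name to value;
# in practice these would be tensors from timestep-specialized checkpoints.

def merge_models(state_dicts, weights=None):
    """Merge models by a weighted average of their parameters.

    Defaults to a uniform average, the simplest form of parameter-space
    merging. All state dicts are assumed to share the same keys/shapes.
    """
    n = len(state_dicts)
    if weights is None:
        weights = [1.0 / n] * n
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged

# Toy example: three "models", each with one scalar parameter.
models = [{"w": 1.0}, {"w": 2.0}, {"w": 3.0}]
print(merge_models(models))  # uniform average -> {'w': 2.0}
```

With real checkpoints the same loop would run over tensor-valued state dicts, so a single merged model serves inference at all timesteps.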

Qianli Ma, Xuefei Ning, Dongrui Liu, Li Niu, Linfeng Zhang • 2024

Related benchmarks

Task | Dataset | Result | Rank
Unconditional Image Generation | CIFAR-10 (test) | FID 3.51 | 216
Image Generation | CIFAR10 32x32 (test) | FID 3.51 | 154
Text-to-Image Generation | MS-COCO | FID 12.78 | 75
Image Generation | LSUN Church 256x256 (test) | FID 7.27 | 55
Text-to-Image Generation | PartiPrompts | CLIP Score 30.02 | 26
Unconditional Image Generation | LSUN Church (test) | FID 7.27 | 17
Unconditional Image Generation | LSUN Bedroom (test) | FID 5.84 | 14
Text-to-Image Generation | ImageNet | FID 26.36 | 9
Image Generation | LSUN-Bedroom 256 x 256 (test val) | FID 5.84 | 5
