
Uni-DAD: Unified Distillation and Adaptation of Diffusion Models for Few-step Few-shot Image Generation

About

Diffusion models (DMs) produce high-quality images, yet their sampling remains costly when adapted to new domains. Distilled DMs are faster but typically remain confined within their teacher's domain. Thus, fast and high-quality generation for novel domains relies on two-stage pipelines: Adapt-then-Distill or Distill-then-Adapt. However, both add design complexity and often degrade quality or diversity. We introduce Uni-DAD, a single-stage pipeline that unifies DM distillation and adaptation. It couples two training signals: (i) a dual-domain distribution-matching distillation (DMD) objective that guides the student toward the distributions of the source teacher and a target teacher, and (ii) a multi-head generative adversarial network (GAN) loss that encourages target realism across multiple feature scales. The source-domain distillation preserves diverse source knowledge, while the multi-head GAN stabilizes training and reduces overfitting, especially in few-shot regimes. The inclusion of a target teacher facilitates adaptation to more structurally distant domains. We evaluate Uni-DAD on two comprehensive benchmarks for few-shot image generation (FSIG) and subject-driven personalization (SDP) using diffusion backbones. It delivers better or comparable quality to state-of-the-art (SoTA) adaptation methods even with fewer than 4 sampling steps, and often surpasses two-stage pipelines in quality and diversity. Code: https://github.com/yaramohamadi/uni-DAD.
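The abstract describes the student's training objective as a combination of two signals: dual-domain DMD terms (toward the source and target teachers) plus a multi-head GAN loss aggregated across feature scales. A minimal sketch of how such a combined objective might be assembled per training step is shown below; the function name, weighting scheme, and default coefficients are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def uni_dad_loss(dmd_src, dmd_tgt, gan_head_losses,
                 w_src=1.0, w_tgt=1.0, w_gan=0.5):
    """Illustrative combination of Uni-DAD's training signals.

    dmd_src, dmd_tgt    -- scalar DMD losses against the source and
                           target teachers (dual-domain distillation)
    gan_head_losses     -- adversarial losses from the discriminator
                           heads, one per feature scale
    w_src, w_tgt, w_gan -- hypothetical loss weights (assumptions)
    """
    # Aggregate the multi-head GAN signal by averaging over scales.
    gan_total = float(np.mean(gan_head_losses))
    return w_src * dmd_src + w_tgt * dmd_tgt + w_gan * gan_total

# Example step with toy loss values from three discriminator heads.
loss = uni_dad_loss(0.8, 1.2, [0.3, 0.5, 0.4])  # 0.8 + 1.2 + 0.5*0.4 = 2.2
```

In practice each term would be computed from network outputs and backpropagated jointly; the sketch only shows how the single-stage objective couples distillation and adversarial adaptation in one update.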

Yara Bahram, Mélodie Desbos, Mohammadhadi Shateri, Eric Granger • 2025

Related benchmarks

Task                        Dataset              Result     Rank
Few-shot Image Generation   Sunglasses 10-shot   FID 22.57  43
Few-shot Image Generation   MetFaces 10-shot     FID 58.13  40
Few-shot Image Generation   Babies 10-shot       FID 45.09  7
Few-shot Image Generation   Cats 10-shot         FID 55.32  6
