Adversarial Concept Distillation for One-Step Diffusion Personalization

About

Recent progress in accelerating text-to-image diffusion models enables high-fidelity synthesis within a single denoising step. However, customizing these fast one-step models remains challenging: existing methods consistently fail to produce acceptable results, underscoring the need for new personalization methodologies. We therefore propose One-step Personalized Adversarial Distillation (OPAD), a framework that combines teacher-student distillation with adversarial supervision. A multi-step diffusion model serves as the teacher, and a one-step student model is trained jointly with it. The student learns from alignment losses that preserve consistency with the teacher and from adversarial losses that align its outputs with the real image distribution. Beyond one-step personalization, we further observe that the student's efficient generation and adversarially enriched representations provide valuable feedback for improving the teacher, forming a collaborative learning stage. Extensive experiments demonstrate that OPAD is the first approach to deliver reliable, high-quality personalization for one-step diffusion models while preserving single-step efficiency; in contrast, prior methods largely fail and produce severe failure cases.
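The abstract describes a student objective with two parts: an alignment term that keeps the one-step student consistent with the multi-step teacher, and an adversarial term that pushes student outputs toward the real image distribution. The sketch below illustrates how such a combined objective could be computed; the function name, the choice of MSE for alignment, the non-saturating GAN form for the adversarial term, and the loss weights are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def opad_student_loss(student_out, teacher_out, disc_logits_fake,
                      lambda_align=1.0, lambda_adv=0.5):
    """Illustrative student objective in the spirit of OPAD (all specifics
    here are assumptions): an alignment loss toward the teacher's output
    plus an adversarial loss from a discriminator trained on real images."""
    # Alignment: mean squared error between student and teacher outputs,
    # preserving consistency with the multi-step teacher.
    align = np.mean((student_out - teacher_out) ** 2)
    # Adversarial: non-saturating GAN loss, softplus(-logits), which is
    # small when the discriminator rates student samples as "real".
    adv = np.mean(np.log1p(np.exp(-disc_logits_fake)))
    return lambda_align * align + lambda_adv * adv

# Toy usage with random stand-ins for images and discriminator logits.
rng = np.random.default_rng(0)
student_out = rng.normal(size=(2, 3, 8, 8))
teacher_out = rng.normal(size=(2, 3, 8, 8))
disc_logits = rng.normal(size=(2,))
loss = opad_student_loss(student_out, teacher_out, disc_logits)
```

In an actual training loop, both terms would be backpropagated through the one-step student, while a separate discriminator update would use real images and detached student samples.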

Yixiong Yang, Tao Wu, Senmao Li, Shiqi Yang, Yaxing Wang, Joost van de Weijer, Kai Wang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Personalized Image Generation | DreamBooth | CLIP-I Score | 78.3 | 34 |
| Customized Text-to-Image Generation | DreamBench (test) | DINO Score | 0.637 | 21 |
| Subject-driven Text-to-Image Generation | DreamBooth | Preference Rate | 66.9 | 4 |
