
ConceptPrism: Concept Disentanglement in Personalized Diffusion Models via Residual Token Optimization

About

Personalized text-to-image generation suffers from concept entanglement, where irrelevant residual information from reference images is captured, leading to a trade-off between concept fidelity and text alignment. Recent disentanglement approaches attempt to solve this using manual guidance, such as linguistic cues or segmentation masks, which limits their applicability and fails to fully articulate the target concept. In this paper, we propose ConceptPrism, a novel framework that automatically disentangles the shared visual concept from image-specific residuals by comparing images within a set. Our method jointly optimizes a target token and image-wise residual tokens using two complementary objectives: a reconstruction loss to ensure fidelity, and a novel exclusion loss that compels residual tokens to discard the shared concept. This process allows the target token to capture the pure concept without direct supervision. Extensive experiments demonstrate that ConceptPrism effectively resolves concept entanglement, achieving a significantly improved trade-off between fidelity and alignment.
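The two-objective optimization the abstract describes can be illustrated with a toy numerical sketch. Everything below is a hypothetical stand-in: the embeddings, the squared-error reconstruction loss, the dot-product form of the exclusion penalty, and all hyperparameters are assumptions for illustration, not the paper's actual formulation, which operates on learned tokens inside a diffusion model.

```python
import numpy as np

# Toy sketch of the two-objective idea: a shared "target" token plus
# per-image "residual" tokens are optimized so that (target + residual_i)
# reconstructs each image embedding, while an exclusion penalty discourages
# residual tokens from overlapping with the shared-concept token.
# All quantities here are hypothetical stand-ins for the paper's
# diffusion-space formulation.
rng = np.random.default_rng(0)
n, d = 3, 8                                 # assumed: 3 reference images, dim 8
image_embeds = rng.normal(size=(n, d))      # stand-in for per-image features
target = rng.normal(size=d) * 0.1           # learnable shared-concept token
residuals = rng.normal(size=(n, d)) * 0.1   # one learnable residual token per image
lam, lr = 0.5, 0.05                         # exclusion weight, step size (assumed)

def losses(target, residuals):
    err = target + residuals - image_embeds   # (n, d) reconstruction error
    recon = np.mean(np.sum(err ** 2, axis=1)) # fidelity objective
    dots = residuals @ target                 # residual/target overlap
    excl = np.mean(dots ** 2)                 # assumed exclusion penalty
    return recon + lam * excl, recon, excl

init_total, _, _ = losses(target, residuals)
for _ in range(500):                          # plain joint gradient descent
    err = target + residuals - image_embeds
    dots = residuals @ target
    g_t = 2 * err.mean(axis=0) + 2 * lam * (dots[:, None] * residuals).mean(axis=0)
    g_r = (2 * err + 2 * lam * dots[:, None] * target) / n
    target -= lr * g_t
    residuals -= lr * g_r

total, recon, excl = losses(target, residuals)
```

In the actual method the reconstruction term would be a diffusion denoising loss; the quadratic stand-ins here only demonstrate how jointly minimizing fidelity and exclusion pushes shared content into the target token and image-specific content into the residuals.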

Minseo Kim, Minchan Kwon, Dongyeun Lee, Yunho Jeon, Junmo Kim • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| Personalized Image Generation | DreamBench (30 distinct personalized subjects) | CLIP-T | 0.357 | 7 |
