
Unsupervised Decomposition and Recombination with Discriminator-Driven Diffusion Models

About

Decomposing complex data into factorized representations can reveal reusable components and enable synthesizing new samples via component recombination. We investigate this in the context of diffusion-based models that learn factorized latent spaces without factor-level supervision. In images, factors can capture background, illumination, and object attributes; in robotic videos, they can capture reusable motion components. To improve both latent factor discovery and quality of compositional generation, we introduce an adversarial training signal via a discriminator trained to distinguish between single-source samples and those generated by recombining factors across sources. By optimizing the generator to fool this discriminator, we encourage physical and semantic consistency in the resulting recombinations. Our method outperforms implementations of prior baselines on CelebA-HQ, Virtual KITTI, CLEVR, and Falcor3D, achieving lower FID scores and better disentanglement as measured by MIG and MCC. Furthermore, we demonstrate a novel application to robotic video trajectories: by recombining learned action components, we generate diverse sequences that significantly increase state-space coverage for exploration on the LIBERO benchmark.
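The adversarial signal described above can be sketched in a few lines. The sketch below is illustrative, not the paper's implementation: it assumes each sample's latent factors form a (K, D) array, and the `recombine_factors`, `discriminator_loss`, and `generator_loss` names are hypothetical. A recombined latent mixes factors from two sources; the discriminator is trained to label single-source samples as real and recombinations as fake, while the generator is optimized to make recombinations indistinguishable.

```python
import numpy as np

def recombine_factors(latents_a, latents_b, swap_mask):
    """Hypothetical recombination: take factor k from source B where
    swap_mask[k] is True, otherwise from source A. latents_*: (K, D)."""
    return np.where(swap_mask[:, None], latents_b, latents_a)

def discriminator_loss(score_single, score_recombined, eps=1e-8):
    """Binary cross-entropy: single-source samples are labeled real (1),
    recombined samples fake (0). Scores are discriminator outputs in (0, 1)."""
    return -(np.log(score_single + eps)
             + np.log(1.0 - score_recombined + eps)).mean()

def generator_loss(score_recombined, eps=1e-8):
    """Non-saturating generator objective: push recombined samples to be
    scored as real, encouraging consistent recombinations."""
    return -np.log(score_recombined + eps).mean()

# Toy usage: K=4 factors of dimension D=8, swapping factors 1 and 3.
rng = np.random.default_rng(0)
z_a, z_b = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
swap = np.array([False, True, False, True])
z_mix = recombine_factors(z_a, z_b, swap)
assert np.allclose(z_mix[0], z_a[0]) and np.allclose(z_mix[1], z_b[1])
```

In training, the discriminator and generator losses would be minimized alternately, with the recombined latents decoded through the diffusion model before scoring.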

Archer Wang, Emile Anand, Yilun Du, Marin Soljačić • 2026

Related benchmarks

Task                          | Dataset        | Metric               | Result  | Rank
------------------------------|----------------|----------------------|---------|-----
Image Generation              | CLEVR          | FID                  | 24.16   | 13
Image Reconstruction          | CelebA-HQ      | FID                  | 43.98   | 9
Robotic Trajectory Generation | LIBERO Scene 5 | State-space Coverage | 1.28e+4 | 5
Robotic Trajectory Generation | LIBERO Scene 6 | State-space Coverage | 9.39e+3 | 5
Image Recombination           | Falcor3D       | FID                  | 130.2   | 2
Image Recombination           | vKITTI         | FID                  | 84.22   | 2
