
Compositional Image Decomposition with Diffusion Models

About

Given an image of a natural scene, we are able to quickly decompose it into a set of components such as objects, lighting, shadows, and foreground. We can then envision a scene where we combine certain components with those from other images, for instance, a set of objects from our bedroom and animals from a zoo under the lighting conditions of a forest, even if we have never encountered such a scene before. In this paper, we present a method to decompose an image into such compositional components. Our approach, Decomp Diffusion, is an unsupervised method which, given a single image, infers a set of different components in the image, each represented by a diffusion model. We demonstrate how components can capture different factors of the scene, ranging from global scene descriptors like shadows or facial expression to local scene descriptors like constituent objects. We further illustrate how inferred factors can be flexibly composed, even with factors inferred from other models, to generate a variety of scenes sharply different from those seen at training time. Website and code at https://energy-based-model.github.io/decomp-diffusion.
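To make the composition idea concrete, here is a minimal sketch of how inferred components could be recombined at sampling time. It assumes a shared denoising network `denoiser(x_t, t, z)` conditioned on a per-component latent `z`, a list of latents `latents` (possibly inferred from different images or models), and composition by averaging the per-component noise predictions at each step of a DDIM-style reverse process. The function names and API are illustrative assumptions, not the paper's actual implementation.

```python
import torch

@torch.no_grad()
def compose_and_sample(denoiser, latents, alphas_cumprod, shape, device="cpu"):
    """Hypothetical sketch: reverse diffusion where each component latent
    conditions a shared denoiser, and the per-component noise predictions
    are averaged into a single composed estimate at every step."""
    x = torch.randn(shape, device=device)  # start from pure noise
    T = len(alphas_cumprod)
    for t in reversed(range(T)):
        # One noise estimate per inferred component, then combine them.
        eps = torch.stack([denoiser(x, t, z) for z in latents]).mean(dim=0)
        a_t = alphas_cumprod[t]
        a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
        # Predict x_0 from the composed estimate, then take a DDIM step.
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps
    return x
```

Because the combination happens on the noise predictions rather than on pixels, latents inferred from different source images (or even different trained models) can be mixed freely in `latents`, which is what enables recombinations unlike anything seen during training.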

Jocelin Su, Nan Liu, Yanbo Wang, Joshua B. Tenenbaum, Yilun Du • 2024

Related benchmarks

Task                  Dataset    Result     Rank
Image Generation      CLEVR      FID 25.7   13
Image Reconstruction  CelebA-HQ  FID 82.7   9
Image Recombination   Falcor3D   FID 157.1  2
Image Recombination   vKITTI     FID 88.46  2
