
InfoDiffusion: Representation Learning Using Information Maximizing Diffusion Models

About

While diffusion models excel at generating high-quality samples, their latent variables typically lack semantic meaning and are not suitable for representation learning. Here, we propose InfoDiffusion, an algorithm that augments diffusion models with low-dimensional latent variables that capture high-level factors of variation in the data. InfoDiffusion relies on a learning objective regularized with the mutual information between observed and hidden variables, which improves latent space quality and prevents the latents from being ignored by expressive diffusion-based decoders. Empirically, we find that InfoDiffusion learns disentangled and human-interpretable latent representations that are competitive with state-of-the-art generative and contrastive methods, while retaining the high sample quality of diffusion models. Our method enables manipulating the attributes of generated images and has the potential to assist tasks that require exploring a learned latent space to generate quality samples, e.g., generative design.
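To make the idea above concrete, here is a minimal conceptual sketch of a training step for a diffusion model augmented with a low-dimensional auxiliary latent. It is not the authors' implementation: the DDPM-style linear noise schedule, the hypothetical `encoder` and `eps_model` modules, and the MMD term used as a tractable stand-in for the mutual-information regularizer are all illustrative assumptions.

```python
# Sketch: diffusion denoising loss conditioned on an auxiliary latent z,
# plus a latent regularizer standing in for the mutual-information term.
# All module names and hyperparameters are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # assumed linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)


def mmd(z: torch.Tensor, z_prior: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Gaussian-kernel MMD between encoded latents and prior samples.

    Used here as a simple surrogate that keeps the aggregate posterior close
    to the prior while the denoising loss keeps z informative about x0.
    """
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))

    return kernel(z, z).mean() + kernel(z_prior, z_prior).mean() - 2 * kernel(z, z_prior).mean()


def training_step(x0: torch.Tensor, encoder: nn.Module, eps_model: nn.Module,
                  mi_weight: float = 1.0) -> torch.Tensor:
    """One loss evaluation: noise-prediction loss conditioned on z + latent regularizer."""
    b = x0.shape[0]
    z = encoder(x0)                                        # low-dim latent summarizing x0
    t = torch.randint(0, T, (b,), device=x0.device)        # random timestep per sample
    a_bar = alphas_bar.to(x0.device)[t].view(b, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward-diffused sample

    eps_hat = eps_model(x_t, t, z)                         # denoiser conditioned on z
    denoise_loss = F.mse_loss(eps_hat, noise)

    z_prior = torch.randn_like(z)                          # standard normal prior on z
    reg = mmd(z, z_prior)                                  # surrogate latent regularizer

    return denoise_loss + mi_weight * reg
```

The key design point the sketch illustrates is that the decoder (the denoising network) receives z at every step, so the only way to lower the denoising loss is to keep z informative, while the regularizer prevents the latent distribution from drifting away from the prior.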

Yingheng Wang, Yair Schiff, Aaron Gokaslan, Weishen Pan, Fei Wang, Christopher De Sa, Volodymyr Kuleshov • 2023

Related benchmarks

Task                                   Dataset               Result      Rank
Image Generation                       CelebA 64x64 (test)   FID 22.3    203
Image Generation                       CelebA (test)         FID 23.6    49
Disentangled Representation Learning   CelebA 64x64 (test)   TAD 0.299   10
Disentangled Representation Learning   CelebA (test)         TAD 0.299   6
