
SODA: Bottleneck Diffusion Models for Representation Learning

About

We introduce SODA, a self-supervised diffusion model designed for representation learning. The model incorporates an image encoder, which distills a source view into a compact representation that, in turn, guides the generation of related novel views. We show that by imposing a tight bottleneck between the encoder and a denoising decoder, and by leveraging novel view synthesis as a self-supervised objective, we can turn diffusion models into strong representation learners, capable of capturing visual semantics in an unsupervised manner. To the best of our knowledge, SODA is the first diffusion model to succeed at ImageNet linear-probe classification, and, at the same time, it accomplishes reconstruction, editing, and synthesis tasks across a wide range of datasets. Further investigation reveals the disentangled nature of its emergent latent space, which serves as an effective interface for controlling and manipulating the model's produced images. All in all, we aim to shed light on the exciting and promising potential of diffusion models, not only for image generation, but also for learning rich and robust representations.
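The training setup described above can be sketched as follows. This is a toy NumPy illustration of the idea only, not the paper's implementation: the dimensions, linear maps, and noise schedule are assumptions (SODA uses an image encoder and a UNet-style denoising decoder). The key structural points are that the latent z is much smaller than the input (the tight bottleneck) and that the decoder denoises a *different* view than the one the encoder saw (the novel view synthesis objective).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed for illustration).
input_dim, bottleneck_dim = 64, 8   # tight bottleneck: 8 << 64

# Encoder: distill the source view into a compact latent z.
W_enc = rng.normal(scale=0.1, size=(bottleneck_dim, input_dim))

def encode(source_view):
    return W_enc @ source_view

# Denoising decoder: predict the noise added to the *target* (novel) view,
# conditioned on z. A single linear map stands in for the real decoder.
W_dec = rng.normal(scale=0.1, size=(input_dim, input_dim + bottleneck_dim))

def denoise(noisy_target, z):
    return W_dec @ np.concatenate([noisy_target, z])

# One self-supervised step (forward pass only):
source = rng.normal(size=input_dim)      # source view
target = rng.normal(size=input_dim)      # related novel view
noise = rng.normal(size=input_dim)
t = 0.5                                  # diffusion time in [0, 1]
noisy_target = np.sqrt(1 - t) * target + np.sqrt(t) * noise

z = encode(source)                       # compact representation
pred_noise = denoise(noisy_target, z)
loss = np.mean((pred_noise - noise) ** 2)   # standard denoising objective
print(z.shape, loss)
```

Because all information about the target view must pass through the 8-dimensional z, minimizing the denoising loss pressures the encoder to capture compact, semantic content, which is why z can later be read out with a linear probe.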

Drew A. Hudson, Daniel Zoran, Mateusz Malinowski, Andrew K. Lampinen, Andrew Jaegle, James L. McClelland, Loic Matthey, Felix Hill, Alexander Lerchner • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Reconstruction | CelebA-HQ (test) | FID (Reconstruction) | 9.54 | 50
Image Reconstruction | ImageNet | PSNR | 23.6 | 43
Disentanglement | SmallNORB (test) | DCI | 64.78 | 17
Disentanglement | MPI3D (test) | DCI | 73.41 | 17
Novel View Synthesis | ShapeNet (test) | PSNR | 27.42 | 16
Novel View Synthesis | Google Scanned Objects (GSO) (test) | PSNR | 24.97 | 14
Disentanglement | CelebA-HQ (test) | Disentanglement | 79.93 | 13
Image Classification | CelebA-HQ (test) | F1 Score | 72.65 | 13
Novel View Synthesis | NMR (test) | PSNR | 28.71 | 10
Disentanglement Analysis | MPI3D Toy | Disentanglement | 87.38 | 8
Showing 10 of 19 rows

Other info

Code
