
Disentangling Factors of Variation Using Few Labels

About

Learning disentangled representations is considered a cornerstone problem in representation learning. Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not suffice to consistently learn disentangled representations. However, in many practical settings, one might have access to a limited amount of supervision, for example through manual labeling of (some) factors of variation in a few training examples. In this paper, we investigate the impact of such supervision on state-of-the-art disentanglement methods and perform a large-scale study, training over 52,000 models under well-defined and reproducible experimental conditions. We observe that a small number of labeled examples (0.01–0.5% of the data set), with potentially imprecise and incomplete labels, is sufficient to perform model selection on state-of-the-art unsupervised models. Further, we investigate the benefit of incorporating supervision into the training process. Overall, we empirically validate that with little and imprecise supervision it is possible to reliably learn disentangled representations.
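The model-selection idea in the abstract — ranking many unsupervised models with a tiny labeled subset — can be sketched as follows. This is a hypothetical illustration, not the paper's actual pipeline: the encoders, the linear-regression R² proxy score, and all names below are assumptions standing in for trained disentanglement models and the supervised metrics used in the study.

```python
# Hypothetical sketch of few-label model selection: train several unsupervised
# models, then rank them with a simple supervised proxy score computed on a
# tiny labeled subset (here: R^2 of a linear map from latents to factors).
import numpy as np

def fit_linear_score(latents, labels):
    """R^2 of least-squares regression from latents to ground-truth factors.

    A crude stand-in for supervised disentanglement metrics; higher is better.
    """
    X = np.hstack([latents, np.ones((len(latents), 1))])  # add bias column
    coef, *_ = np.linalg.lstsq(X, labels, rcond=None)
    pred = X @ coef
    ss_res = np.sum((labels - pred) ** 2)
    ss_tot = np.sum((labels - labels.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

def select_model(encoders, x_labeled, y_labeled):
    """Pick the encoder whose latents best explain the labeled factors."""
    scores = [fit_linear_score(enc(x_labeled), y_labeled) for enc in encoders]
    return int(np.argmax(scores)), scores

# Toy demo: two "models" scored on 50 labeled points (roughly 0.1% of a
# 50k-image data set, in the regime the abstract describes).
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 8))                  # stand-in for input images
y = x[:, :2].copy()                           # two ground-truth factors
good = lambda imgs: imgs[:, :4]               # latents that contain the factors
bad = lambda imgs: rng.normal(size=(len(imgs), 4))  # uninformative latents
best, scores = select_model([good, bad], x, y)
print(best)
```

Even this naive proxy separates the two encoders: the informative latents reach an R² near 1 while the random ones do not, so the argmax picks the better model without labeling more than a handful of examples.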

Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Molecular Generation | QM9 | Validity | 100 | 30 |
| Disentangled Representation Learning | dSprites | -- | -- | 9 |
| Property Prediction | dSprites | Size Prediction Error | 0.0031 | 6 |
| Property Prediction | Pendulum | Pendulum Angle | 9.9455 | 6 |
| Image Generation | dSprites | Neg LogProb | 0.23 | 4 |
| Disentanglement | Pendulum | Average MI | 2.23 | 4 |
| Molecular Generation | QAC | Validity | 100 | 3 |
| Property Control | QM9 | LogP | 50.55 | 3 |
| Property Control | QAC | logP | 15.13 | 3 |
