
Double Control Variates for Gradient Estimation in Discrete Latent Variable Models

About

Stochastic gradient-based optimisation for discrete latent variable models is challenging due to the high variance of gradients. We introduce a variance reduction technique for score function estimators that makes use of double control variates. These control variates act on top of a main control variate and aim to further reduce the variance of the overall estimator. We develop a double control variate for the REINFORCE leave-one-out estimator using Taylor expansions. For training discrete latent variable models, such as variational autoencoders with binary latent variables, our approach adds no extra computational cost compared to standard training with the REINFORCE leave-one-out estimator. We apply our method to challenging high-dimensional toy examples and to training variational autoencoders with binary latent variables, and show that our estimator can achieve lower variance than other state-of-the-art estimators.
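For context, a minimal sketch of the baseline the abstract builds on — the REINFORCE leave-one-out (RLOO) estimator, where each sample's control variate is the average objective value of the other samples. This is an illustrative implementation for a Bernoulli latent model, not the paper's code; the function names and the toy objective are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rloo_grad(f, theta, K=4):
    """REINFORCE leave-one-out gradient estimate of
    d/dtheta E_{b ~ Bern(sigmoid(theta))}[f(b)].

    Each sample k uses the mean of f over the other K-1 samples as a
    baseline (control variate), which keeps the estimator unbiased
    while reducing its variance.
    """
    p = 1.0 / (1.0 + np.exp(-theta))                        # sigmoid(theta)
    b = (rng.random((K,) + theta.shape) < p).astype(float)  # K binary samples
    fb = np.array([f(bk) for bk in b])                      # shape (K,)
    # leave-one-out baseline for each sample k
    baseline = (fb.sum() - fb) / (K - 1)
    # score function of the Bernoulli: d/dtheta log q(b|theta) = b - p
    score = b - p                                           # shape (K, D)
    return ((fb - baseline)[:, None] * score).mean(axis=0)

# hypothetical toy objective in D dimensions
D = 10
theta = np.zeros(D)
g = rloo_grad(lambda b: np.sum((b - 0.499) ** 2), theta, K=8)
print(g.shape)  # (10,)
```

The paper's contribution is a second, Taylor-expansion-based control variate applied on top of this leave-one-out baseline; the sketch above only shows the standard RLOO starting point.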

Michalis K. Titsias, Jiaxin Shi • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Log-likelihood estimation | MNIST dynamically binarized (test) | Log-Likelihood | -99.16 | 48 |
| Binary Latent VAE Training | MNIST (train) | Avg ELBO | 686.5 | 14 |
| Binary Latent VAE Training | Fashion-MNIST (train) | Average ELBO | 193.9 | 14 |
| Binary Latent VAE Training | Omniglot (train) | Average ELBO | 457.4 | 14 |
| Generative Modeling | Dynamically binarized MNIST (test) | NELBO | -97.62 | 13 |
| Generative Modeling | MNIST dynamically binarized (train) | Training ELBO | -97.59 | 9 |
| Generative Modeling | Fashion-MNIST dynamically binarized (train) | ELBO (Train) | -234.3 | 9 |
| Generative Modeling | Fashion-MNIST dynamically binarized (test) | Test Log-Likelihood Bound | -234.3 | 9 |
| Generative Modeling | Omniglot dynamically binarized (train) | Training ELBO | -108.7 | 9 |
| Generative Modeling | OMNIGLOT dynamically binarized (test) | Log-Likelihood Bound (100-point) | -107.5 | 9 |
Showing 10 of 15 rows
