
Gradient Estimation with Discrete Stein Operators

About

Gradient estimation -- approximating the gradient of an expectation with respect to the parameters of a distribution -- is central to the solution of many machine learning problems. However, when the distribution is discrete, most common gradient estimators suffer from excessive variance. To improve the quality of gradient estimation, we introduce a variance reduction technique based on Stein operators for discrete distributions. We then use this technique to build flexible control variates for the REINFORCE leave-one-out estimator. Our control variates can be adapted online to minimize variance and do not require extra evaluations of the target function. In benchmark generative modeling tasks such as training binary variational autoencoders, our gradient estimator achieves substantially lower variance than state-of-the-art estimators with the same number of function evaluations.
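The REINFORCE leave-one-out (RLOO) estimator that the paper's control variates build on can be sketched as follows for a factorized Bernoulli distribution. This is a minimal illustration, not the paper's implementation: the function name `rloo_gradient`, the Bernoulli parameterization, and the sample count `K` are assumptions chosen for clarity, and the paper's discrete Stein control variates are not shown.

```python
import numpy as np

def rloo_gradient(f, logits, rng, K=8):
    """REINFORCE leave-one-out (RLOO) estimate of the gradient of
    E_{x ~ Bernoulli(sigmoid(logits))}[f(x)] with respect to the logits.

    Each sample's baseline is the mean of f over the other K-1 samples,
    which reduces variance without extra evaluations of f.
    """
    p = 1.0 / (1.0 + np.exp(-logits))                      # Bernoulli probabilities
    x = (rng.random((K, logits.size)) < p).astype(float)   # K binary samples
    fx = np.array([f(xk) for xk in x])                     # K function values
    # Leave-one-out baseline: mean of the other K-1 samples' values
    baseline = (fx.sum() - fx) / (K - 1)
    # Score of a Bernoulli w.r.t. its logits: x - p
    score = x - p
    return (score * (fx - baseline)[:, None]).mean(axis=0)
```

For a linear test function such as `f(x) = x.sum()`, the true gradient is `p * (1 - p)`, so averaging many RLOO estimates should recover it; the paper's contribution is to subtract an additional Stein-operator control variate from `fx` before this step to shrink the variance further.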

Jiaxin Shi, Yuhao Zhou, Jessica Hwang, Michalis K. Titsias, Lester Mackey • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Log-likelihood estimation | MNIST dynamically binarized (test) | Log-Likelihood | -98.72 | 48 |
| Generative Modeling | MNIST (train) | - | - | 42 |
| Generative Modeling | Fashion-MNIST (train) | - | - | 30 |
| Generative Modeling | Omniglot (train) | - | - | 30 |
| Binary Latent VAE Training | MNIST (train) | Avg ELBO | 692.4 | 14 |
| Binary Latent VAE Training | Fashion-MNIST (train) | Average ELBO | 196.6 | 14 |
| Binary Latent VAE Training | Omniglot (train) | Average ELBO | 461.9 | 14 |
| Generative Modeling | Dynamically binarized MNIST (test) | NELBO | -97.43 | 13 |
| Generative Modeling | MNIST dynamically binarized (train) | Training ELBO | -97.21 | 9 |
| Generative Modeling | Fashion-MNIST dynamically binarized (train) | ELBO (Train) | -234.1 | 9 |

Showing 10 of 18 rows

Other info

Code
