
Estimating Gradients for Discrete Random Variables by Sampling without Replacement

About

We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement, which reduces variance by avoiding duplicate samples. We show that our estimator can be derived as the Rao-Blackwellization of three different estimators. Combining our estimator with REINFORCE, we obtain a policy gradient estimator and reduce its variance using a built-in control variate that requires no additional model evaluations. The resulting estimator is closely related to other gradient estimators. Experiments with a toy problem, a categorical Variational Auto-Encoder and a structured prediction problem show that ours is the only estimator that is consistently among the best in both high and low entropy settings.
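The basic building block, drawing k distinct categories without replacement from a categorical distribution, can be implemented with the Gumbel-top-k trick: perturb the log-probabilities with i.i.d. Gumbel noise and keep the k largest. A minimal sketch (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def sample_without_replacement(logits, k, rng=None):
    """Draw k distinct category indices via the Gumbel-top-k trick.

    Adding i.i.d. Gumbel(0, 1) noise to the log-probabilities and
    taking the indices of the k largest perturbed values yields a
    sample without replacement from the categorical distribution
    defined by the logits.
    """
    rng = rng or np.random.default_rng()
    gumbels = rng.gumbel(size=len(logits))
    # argsort is ascending, so take the last k and reverse for top-k order
    return np.argsort(logits + gumbels)[-k:][::-1]

logits = np.log(np.array([0.5, 0.3, 0.15, 0.05]))
samples = sample_without_replacement(logits, k=3)
# samples holds 3 distinct indices from {0, 1, 2, 3}
```

Because every draw is distinct, an estimator built on these samples never wastes computation on duplicates, which is the source of the variance reduction claimed above.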

Wouter Kool, Herke van Hoof, Max Welling • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Log-likelihood estimation | MNIST dynamically binarized (test) | Log-Likelihood | -93.69 | 48 |
| Generative Modeling | Dynamic MNIST (train) | Log Likelihood | -92.98 | 30 |
| Generative Modeling | Fashion-MNIST (train) | Log Likelihood (100 samples) | -231.7 | 30 |
| VAE Log-Likelihood Estimation | Fashion MNIST (test) | Log-Likelihood | -234.4 | 30 |
| Generative Modeling | Omniglot (train) | Log Likelihood | -109.5 | 30 |
| Variational Inference | Omniglot (test) | Test Log Likelihood | -113.7 | 30 |
| Conditional estimation | Dynamic MNIST (test) | Test Log Likelihood | 59.93 | 18 |
| Conditional estimation | Omniglot (test) | Test Log Likelihood | 72.94 | 15 |
| Conditional estimation | Fashion MNIST (test) | Test Log Likelihood | 135.5 | 15 |
| Conditional estimation | Fashion-MNIST (train) | Final Training Log Likelihood | 133.4 | 15 |

Showing 10 of 12 rows.
