CARMS: Categorical-Antithetic-REINFORCE Multi-Sample Gradient Estimator
About
Accurately backpropagating gradients through categorical variables is a challenging task that arises in various domains, such as training discrete latent variable models. To this end, we propose CARMS, an unbiased gradient estimator for categorical random variables based on multiple mutually negatively correlated (jointly antithetic) samples. CARMS combines REINFORCE with copula-based sampling to avoid duplicate samples and reduce variance, while keeping the estimator unbiased via importance sampling. It generalizes both the ARMS antithetic estimator for binary variables, which is CARMS with two categories, and LOORF/VarGrad, the leave-one-out REINFORCE estimator, which is CARMS with independent samples. We evaluate CARMS on several benchmark datasets on a generative modeling task, as well as a structured output prediction task, and find it to outperform competing methods, including a strong self-control baseline. The code is publicly available.
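To make the LOORF/VarGrad special case concrete, below is a minimal NumPy sketch of the leave-one-out REINFORCE estimator for a single categorical variable, i.e. CARMS with independent (non-antithetic) samples. The function names and the toy objective `f` are illustrative, not from the CARMS codebase:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def loorf_grad(logits, f, n_samples, rng):
    """Unbiased estimate of d/d(logits) E_{z~Cat(softmax(logits))}[f(z)]
    using the leave-one-out REINFORCE (LOORF/VarGrad) estimator.
    Requires n_samples >= 2 (the leave-one-out baseline needs peers)."""
    probs = softmax(logits)
    # Independent samples; CARMS would replace these with jointly
    # antithetic (negatively correlated) copula samples.
    z = rng.choice(len(probs), size=n_samples, p=probs)
    fz = np.array([f(int(k)) for k in z])
    baseline = fz.mean()
    grad = np.zeros_like(logits)
    for k, fk in zip(z, fz):
        # Score function of a categorical w.r.t. logits: one_hot(z) - probs.
        score = -probs.copy()
        score[k] += 1.0
        grad += (fk - baseline) * score
    # Dividing by n-1 (not n) compensates for centering with the sample mean.
    return grad / (n_samples - 1)

rng = np.random.default_rng(0)
logits = np.zeros(4)                 # uniform over 4 categories
f = lambda k: float(k == 2)          # toy reward: 1 only for category 2
est = np.mean([loorf_grad(logits, f, 8, rng) for _ in range(5000)], axis=0)
# True gradient here is p_2 * (one_hot(2) - p) = [-0.0625, -0.0625, 0.1875, -0.0625]
```

The averaged estimate converges to the exact gradient, confirming unbiasedness; CARMS keeps this leave-one-out structure but draws the `n_samples` copies antithetically and reweights them with importance sampling.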
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Log-likelihood estimation | MNIST dynamically binarized (test) | Log-Likelihood | -92.97 | 48 |
| Generative Modeling | Dynamic MNIST (train) | Log Likelihood | -92.13 | 30 |
| Generative Modeling | Fashion-MNIST (train) | Log Likelihood (100 samples) | -230.8 | 30 |
| Generative Modeling | Omniglot (train) | Log Likelihood | -108.6 | 30 |
| VAE Log-Likelihood Estimation | Fashion MNIST (test) | Log-Likelihood | -233.4 | 30 |
| Variational Inference | Omniglot (test) | Test Log Likelihood | -112.7 | 30 |
| Conditional estimation | Dynamic MNIST (test) | Test Log Likelihood | 60.01 | 18 |
| Conditional estimation | Dynamic MNIST (train) | Final Log Likelihood | 58.35 | 15 |
| Conditional estimation | Omniglot (train) | Final Training Log Likelihood | 66.94 | 15 |
| Conditional estimation | Omniglot (test) | Test Log Likelihood | 72.88 | 15 |