Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning
About
We propose a simple algorithm to train stochastic neural networks to draw samples from given target distributions for probabilistic inference. Our method is based on iteratively adjusting the neural network parameters so that the output changes along a Stein variational gradient direction that maximally decreases the KL divergence with the target distribution. Our method works for any target distribution specified by its unnormalized density function, and can train any black-box architecture that is differentiable in terms of the parameters we want to adapt. As an application of our method, we propose an amortized MLE algorithm for training deep energy models, where a neural sampler is adaptively trained to approximate the likelihood function. Our method mimics an adversarial game between the deep energy model and the neural sampler, and obtains realistic-looking images competitive with state-of-the-art results.
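The core quantity in this approach is the Stein variational gradient: a kernel-weighted combination of the target's score function and a repulsive kernel term that moves a set of particles toward the target distribution. Below is a minimal NumPy sketch of that update direction (not the paper's implementation; the RBF bandwidth `h`, step size, and the toy Gaussian target are illustrative assumptions), which in the amortized setting would be backpropagated into the sampler's parameters:

```python
import numpy as np

def rbf_kernel(X, h=1.0):
    """RBF kernel matrix K[j, i] = k(x_j, x_i) and its gradient w.r.t. x_j."""
    diff = X[:, None, :] - X[None, :, :]          # (n, n, d)
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * h ** 2))
    grad_K = -diff / h ** 2 * K[:, :, None]       # grad_{x_j} k(x_j, x_i)
    return K, grad_K

def svgd_direction(X, score, h=1.0):
    """Stein variational gradient:
    phi(x_i) = (1/n) sum_j [k(x_j, x_i) * score(x_j) + grad_{x_j} k(x_j, x_i)]
    where score(x) = grad_x log p(x) of the (unnormalized) target density.
    """
    n = X.shape[0]
    K, grad_K = rbf_kernel(X, h)
    S = score(X)                                  # (n, d)
    return (K.T @ S + grad_K.sum(axis=0)) / n

# Toy usage (assumed target): a Gaussian N(mu, I), whose score is -(x - mu).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mu = np.array([2.0, -1.0])
    X = rng.normal(size=(100, 2))                 # initial particles
    for _ in range(500):
        X = X + 0.1 * svgd_direction(X, lambda X: -(X - mu))
    print(X.mean(axis=0))                         # approaches mu
```

The first term pulls particles toward high-density regions of the target; the second (kernel-gradient) term acts as a repulsive force that keeps the particles spread out rather than collapsing to the mode.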
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 2D Synthetic Target Sampling | Gaussian 2D Synthetic | KSD | 0.091 | 8 |
| 2D Synthetic Target Sampling | ROSENBROCK 2D Synthetic | KSD | 0.121 | 8 |
| 2D Synthetic Target Sampling | DONUT 2D Synthetic | KSD | 0.104 | 8 |
| 2D Synthetic Target Sampling | SQUIGGLE Synthetic 2D | KSD | 0.124 | 8 |
| 2D Synthetic Target Sampling | FUNNEL 2D Synthetic | KSD | 0.129 | 8 |
| 2D Synthetic Target Sampling | MOG2 2D Synthetic | KSD | 0.131 | 8 |
| Bayesian Logistic Regression | Covertype (test) | Accuracy | 75.37 | 6 |