# Entropy-based Training Methods for Scalable Neural Implicit Sampler

## About
Efficiently sampling from unnormalized target distributions is a fundamental problem in scientific computing and machine learning. Traditional approaches such as Markov chain Monte Carlo (MCMC) guarantee asymptotically unbiased samples but are computationally inefficient, particularly for high-dimensional targets, because they require many iterations to produce a batch of samples. In this paper, we introduce an efficient and scalable neural implicit sampler that overcomes these limitations: it generates large batches of samples at low computational cost by using a neural transformation that maps easily sampled latent vectors directly to target samples, with no iterative procedure. To train the sampler, we propose two novel methods: KL training, which minimizes the Kullback-Leibler divergence between the sampler and the target distribution, and Fisher training, which minimizes the Fisher divergence between them. Either training method effectively optimizes the implicit sampler to generate from the desired target distribution. To demonstrate the effectiveness, efficiency, and scalability of the proposed samplers, we evaluate them on three sampling benchmarks of different scales.
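A minimal sketch of the KL objective described above. The paper's sampler is a neural network whose density is intractable; here, purely for illustration, we use a reparameterized 1-D Gaussian sampler (x = mu + sigma * z), where the entropy term of KL(q || p) is tractable, so the idea of pushing directly mapped samples toward an unnormalized target is concrete. The target density, learning rate, and batch size below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical unnormalized 1-D target: log p(x) = -(x - 3)^2 / 2 (i.e. N(3, 1)).
def log_p_grad(x):
    return 3.0 - x  # d/dx log p(x); only the score is needed, not the normalizer

rng = np.random.default_rng(0)
mu, log_sigma = 0.0, 0.0   # sampler parameters (stand-in for network weights)
lr, batch = 0.05, 128

for _ in range(2000):
    sigma = np.exp(log_sigma)
    z = rng.standard_normal(batch)   # easily sampled latent vectors
    x = mu + sigma * z               # direct map z -> x, no iterative procedure
    g = log_p_grad(x)                # target score at the generated samples
    # KL(q || p) = -H(q) - E_q[log p(x)] + const; for this reparameterized
    # Gaussian, H(q) = log(sigma) + const, giving the gradients:
    grad_mu = -g.mean()
    grad_log_sigma = -1.0 - sigma * (z * g).mean()
    mu -= lr * grad_mu
    log_sigma -= lr * grad_log_sigma
```

After training, (mu, exp(log_sigma)) should approach the target's (3, 1); the entropy term is what keeps sigma from collapsing to zero.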
## Related benchmarks

| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| 2D Synthetic Target Sampling | FUNNEL 2D Synthetic | KSD | 0.115 | 8 |
| 2D Synthetic Target Sampling | SQUIGGLE Synthetic 2D | KSD | 0.118 | 8 |
| 2D Synthetic Target Sampling | MOG2 2D Synthetic | KSD | 0.104 | 8 |
| 2D Synthetic Target Sampling | ROSENBROCK 2D Synthetic | KSD | 0.123 | 8 |
| 2D Synthetic Target Sampling | Gaussian 2D Synthetic | KSD | 0.095 | 8 |
| 2D Synthetic Target Sampling | DONUT 2D Synthetic | KSD | 0.109 | 8 |
| Bayesian Logistic Regression | Covertype (test) | Accuracy | 76.22 | 6 |
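The KSD figures above are Kernel Stein Discrepancies, which measure sample quality against a target using only its score function. As a hedged illustration (a generic V-statistic estimator with an RBF kernel and an assumed bandwidth, not the benchmark's exact protocol, so values are not comparable to the table), a minimal sketch:

```python
import numpy as np

def ksd(x, score_fn, h=1.0):
    """V-statistic estimate of the squared Kernel Stein Discrepancy between
    samples x (shape (n, d)) and a target with score function score_fn,
    using the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 h^2))."""
    n, d = x.shape
    s = score_fn(x)                          # (n, d): grad log p at each sample
    diff = x[:, None, :] - x[None, :, :]     # (n, n, d) pairwise x_i - x_j
    sq = (diff ** 2).sum(-1)                 # (n, n) squared distances
    k = np.exp(-sq / (2 * h ** 2))           # RBF kernel matrix
    # Stein kernel k_p(x_i, x_j), assembled term by term:
    t1 = (s @ s.T) * k                                   # s(x)^T s(y) k
    t2 = np.einsum("id,ijd->ij", s, diff) / h ** 2 * k   # s(x)^T grad_y k
    t3 = np.einsum("ijd,jd->ij", -diff, s) / h ** 2 * k  # grad_x k^T s(y)
    t4 = (d / h ** 2 - sq / h ** 4) * k                  # tr(grad_x grad_y k)
    return (t1 + t2 + t3 + t4).mean()

# Example: standard-Gaussian target, whose score is s(x) = -x.
rng = np.random.default_rng(0)
on_target = rng.standard_normal((200, 2))
shifted = on_target + 2.0  # clearly off-target samples
val_on, val_off = ksd(on_target, lambda x: -x), ksd(shifted, lambda x: -x)
```

Samples drawn from the target itself yield a KSD near zero, while shifted samples score noticeably higher, which is why lower KSD in the table indicates a better sampler.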