Glow: Generative Flow with Invertible 1x1 Convolutions
About
Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1x1 convolution. Using our method we demonstrate a significant improvement in log-likelihood on standard benchmarks. Perhaps most strikingly, we demonstrate that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images. The code for our model is available at https://github.com/openai/glow
Diederik P. Kingma, Prafulla Dhariwal • 2018
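The core idea, a learned invertible 1x1 convolution, can be illustrated in a few lines. The sketch below is a minimal NumPy illustration under assumed shapes (a single image of shape `(H, W, C)` and a square channel-mixing matrix), not the authors' TensorFlow implementation; the function names are hypothetical. It shows the two properties the abstract relies on: the operation is exactly invertible, and its log-determinant contribution to the log-likelihood has the closed form `H * W * log|det(weight)|`.

```python
import numpy as np

def invertible_1x1_conv(x, weight):
    """Apply a 1x1 convolution (per-pixel channel mixing) to x of shape (H, W, C).

    Returns the transformed tensor and the log-determinant term this layer
    contributes to the model's log-likelihood: H * W * log|det(weight)|.
    """
    h, w, _ = x.shape
    z = x @ weight.T  # each pixel's channel vector is multiplied by `weight`
    sign, logabsdet = np.linalg.slogdet(weight)
    return z, h * w * logabsdet

def inverse_1x1_conv(z, weight):
    """Exact inverse: apply weight^{-1} to each pixel's channel vector."""
    return z @ np.linalg.inv(weight).T
```

Because the weight is a small C x C matrix (C = number of channels), both the inverse and the determinant are cheap to compute, which is what makes the layer practical inside a flow.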
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Generation | CIFAR-10 (test) | FID 46.9 | 471 |
| Unconditional Image Generation | CIFAR-10 (test) | -- | 216 |
| Image Generation | CIFAR-10 | -- | 178 |
| Unconditional Image Generation | CIFAR-10 unconditional | FID 48.9 | 159 |
| Image Generation | CIFAR10 32x32 (test) | FID 48.9 | 154 |
| Out-of-Distribution Detection | Textures | AUROC 0.27 | 141 |
| Density Estimation | CIFAR-10 (test) | Bits/dim 3.35 | 134 |
| Unconditional Generation | CIFAR-10 (test) | FID 48.9 | 102 |
| Out-of-Distribution Detection | CIFAR-10 vs SVHN (test) | AUROC 0.64 | 101 |
| Out-of-Distribution Detection | CIFAR-10 vs CIFAR-100 (test) | AUROC 0.65 | 93 |
Showing 10 of 83 rows