Manifold Learning Benefits GANs
About
In this paper, we improve Generative Adversarial Networks by incorporating a manifold learning step into the discriminator. We consider locality-constrained linear and subspace-based manifolds, and locality-constrained non-linear manifolds. In our design, the manifold learning and coding steps are intertwined with layers of the discriminator, with the goal of attracting intermediate feature representations onto manifolds. We adaptively balance the discrepancy between feature representations and their manifold view, which is a trade-off between denoising on the manifold and refining the manifold. We find that locality-constrained non-linear manifolds outperform linear manifolds due to their non-uniform density and smoothness. We also substantially outperform state-of-the-art baselines.
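The core idea — projecting intermediate discriminator features onto a locality-constrained manifold and adaptively blending that manifold view back with the raw features — can be sketched in a few lines. The snippet below is an illustrative NumPy sketch, not the paper's implementation: the locality-constrained linear coding here is a simple k-nearest-atom least-squares reconstruction, and `llc_project`, `blend`, and the `alpha` trade-off weight are hypothetical names chosen for exposition.

```python
import numpy as np

def llc_project(features, dictionary, k=5):
    """Locality-constrained linear coding (sketch): reconstruct each
    feature from the span of its k nearest dictionary atoms, giving a
    'manifold view' of the feature."""
    proj = np.empty_like(features)
    for i, f in enumerate(features):
        # find the k atoms closest to this feature (the locality constraint)
        dist = np.linalg.norm(dictionary - f, axis=1)
        idx = np.argsort(dist)[:k]
        B = dictionary[idx]                       # (k, dim) local basis
        # least-squares code for f over the local atoms
        c, *_ = np.linalg.lstsq(B.T, f, rcond=None)
        proj[i] = B.T @ c                         # reconstruction on the local manifold
    return proj

def blend(features, alpha, dictionary, k=5):
    """Adaptively balance raw features against their manifold view:
    alpha -> 1 denoises onto the manifold, alpha -> 0 keeps the raw
    features (and lets them refine the manifold)."""
    manifold_view = llc_project(features, dictionary, k)
    return (1.0 - alpha) * features + alpha * manifold_view
```

In the paper's design this kind of coding step is interleaved with discriminator layers; here the dictionary stands in for whatever learned atoms or prototypes define the manifold at a given layer.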
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Generation | CIFAR-10 (test) | -- | 471 |
| Image Generation | ImageNet 64x64 (train/val) | FID: 4.26 | 83 |
| Image Generation | ImageNet 128x128 | -- | 51 |
| Image Generation | CIFAR-100 (20% data) | IS: 13.78 | 41 |
| Image Generation | CIFAR-100 (10% data) | IS: 12.67 | 41 |
| Image Generation | CIFAR-10 (20% data) | IS: 10.12 | 35 |
| Image Generation | CIFAR-10 (10% data) | IS: 10.04 | 35 |
| Image Generation | CIFAR-100 (full data) | IS: 13.8 | 35 |
| Image Generation | CIFAR-100 (test) | IS: 13.88 | 35 |
| Image Generation | CIFAR-10 (train) | -- | 32 |