# 3D-aware Image Synthesis via Learning Structural and Textural Representations

## About
Making generative models 3D-aware bridges the 2D image space and the 3D physical world, yet remains challenging. Recent attempts equip a Generative Adversarial Network (GAN) with a Neural Radiance Field (NeRF), which maps 3D coordinates to pixel values, as a 3D prior. However, the implicit function in NeRF has a very local receptive field, making it hard for the generator to capture the global structure. Meanwhile, NeRF is built on volume rendering, which can be too costly to produce high-resolution results, increasing the optimization difficulty. To alleviate these two problems, we propose a novel framework, termed VolumeGAN, for high-fidelity 3D-aware image synthesis, through explicitly learning a structural representation and a textural representation. We first learn a feature volume to represent the underlying structure, which is then converted to a feature field using a NeRF-like model. The feature field is further accumulated into a 2D feature map as the textural representation, followed by a neural renderer for appearance synthesis. Such a design enables independent control of the shape and the appearance. Extensive experiments on a wide range of datasets show that our approach achieves substantially higher image quality and better 3D control than previous methods.
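To make the pipeline concrete, below is a minimal, hypothetical PyTorch sketch of the structural-to-textural flow described above: a latent code mapped to a 3D feature volume, a NeRF-like MLP producing a feature field with densities, accumulation along rays into a 2D feature map, and a convolutional neural renderer. All module names, layer sizes, and the softmax stand-in for alpha compositing are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class VolumeGANSketch(nn.Module):
    """Simplified sketch of the structural/textural pipeline (assumed sizes)."""

    def __init__(self, z_dim=128, vol_ch=32, feat_ch=64, img_ch=3):
        super().__init__()
        # Structural representation: latent code -> coarse 3D feature volume.
        self.to_volume = nn.Sequential(
            nn.ConvTranspose3d(z_dim, vol_ch * 2, 4),         # 1^3 -> 4^3
            nn.LeakyReLU(0.2),
            nn.ConvTranspose3d(vol_ch * 2, vol_ch, 4, 2, 1),  # 4^3 -> 8^3
            nn.LeakyReLU(0.2),
        )
        # NeRF-like MLP: volume features + 3D coordinates -> feature field
        # plus a per-point density used for accumulation along rays.
        self.field_mlp = nn.Sequential(
            nn.Linear(vol_ch + 3, feat_ch), nn.LeakyReLU(0.2),
            nn.Linear(feat_ch, feat_ch + 1),                  # features + density
        )
        # Neural renderer: 2D convs turn the accumulated feature map
        # (textural representation) into an RGB image.
        self.renderer = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(feat_ch, img_ch, 3, padding=1),
        )

    def forward(self, z, points):
        # z: (B, z_dim); points: (B, H, W, S, 3) ray samples in [-1, 1]^3.
        B, H, W, S, _ = points.shape
        volume = self.to_volume(z.view(B, -1, 1, 1, 1))       # (B, C, 8, 8, 8)
        # Query the feature volume at the ray samples (trilinear interpolation).
        grid = points.view(B, H, W * S, 1, 3)
        feats = nn.functional.grid_sample(volume, grid, align_corners=True)
        feats = feats.view(B, -1, H, W, S).permute(0, 2, 3, 4, 1)
        out = self.field_mlp(torch.cat([feats, points], dim=-1))
        field, density = out[..., :-1], out[..., -1]
        # Accumulate the feature field along each ray into a 2D feature map.
        # Softmax over samples is a crude stand-in for alpha compositing.
        weights = torch.softmax(density, dim=-1).unsqueeze(-1)
        feat_map = (weights * field).sum(dim=3)               # (B, H, W, C)
        return self.renderer(feat_map.permute(0, 3, 1, 2))    # (B, 3, H, W)
```

In this sketch, re-sampling `points` along rays from a different camera pose changes the rendered view while the latent code `z` is held fixed:

```python
z = torch.randn(2, 128)
points = torch.rand(2, 16, 16, 8, 3) * 2 - 1  # ray samples in [-1, 1]^3
img = VolumeGANSketch()(z, points)            # -> (2, 3, 16, 16)
```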
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Unconditional Image Synthesis | FFHQ 256x256 (test) | FID 9.1 | 31 |
| Image Synthesis | FFHQ | FID 9.1 | 16 |
| Image Generation | CARLA 128x128 (test) | FID 7.9 | 9 |
| Image Synthesis | CARLA (full dataset) | FID 7.9 | 7 |
| 3D-aware Image Synthesis | AFHQ Cat (test) | FID 5.136 | 6 |
| 3D-aware Image Synthesis | LSUN Bedroom (test) | FID 18.107 | 6 |
| 3D-aware Image Synthesis | FFHQ (test) | FID 9.598 | 6 |
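All results are reported as Fréchet Inception Distance (FID; lower is better). For reference, here is a minimal sketch of how such scores are commonly computed with the third-party clean-fid package; the directory paths are placeholders, and this is not the exact evaluation protocol used for the table above.

```python
from cleanfid import fid

# Compare Inception statistics of generated samples against real images;
# lower FID means the two distributions are closer. Paths are placeholders.
score = fid.compute_fid("generated_samples/", "real_images/")
print(f"FID: {score:.3f}")
```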