
3D-aware Image Synthesis via Learning Structural and Textural Representations

About

Making generative models 3D-aware bridges the 2D image space and the 3D physical world yet remains challenging. Recent attempts equip a Generative Adversarial Network (GAN) with a Neural Radiance Field (NeRF), which maps 3D coordinates to pixel values, as a 3D prior. However, the implicit function in NeRF has a very local receptive field, making it hard for the generator to capture the global structure. Meanwhile, NeRF is built on volume rendering, which can be too costly to produce high-resolution results, increasing the optimization difficulty. To alleviate these two problems, we propose a novel framework, termed VolumeGAN, for high-fidelity 3D-aware image synthesis through explicitly learning a structural representation and a textural representation. We first learn a feature volume to represent the underlying structure, which is then converted to a feature field using a NeRF-like model. The feature field is further accumulated into a 2D feature map as the textural representation, followed by a neural renderer for appearance synthesis. Such a design enables independent control of the shape and the appearance. Extensive experiments on a wide range of datasets show that our approach achieves substantially higher image quality and better 3D control than previous methods.
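The pipeline described above can be sketched in a few lines of NumPy. This is a minimal, hedged illustration, not the authors' implementation: the shapes, the random stand-ins for learned components, and the straight-through-depth ray marching are all assumptions made for brevity. In VolumeGAN proper, a NeRF-like MLP maps coordinates and local volume features to features and densities, and the renderer is a learned network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not taken from the paper).
C, D, H, W = 8, 4, 16, 16   # feature volume: channels x depth x height x width
S = D                       # one ray sample per depth slice, for simplicity

# 1) Structural representation: a learned 3D feature volume (random stand-in).
feature_volume = rng.standard_normal((C, D, H, W)).astype(np.float32)

# 2) "Feature field": per-sample densities along each pixel's ray. Here every
#    ray marches straight through the depth axis, so sampling the field
#    reduces to indexing the volume; a real NeRF-like MLP would map
#    (coordinate, local feature) -> (feature, density).
densities = np.abs(rng.standard_normal((S, H, W))).astype(np.float32)

# 3) Volume-rendering accumulation: alpha compositing along each ray turns
#    the 3D feature field into a 2D feature map (the textural representation).
delta = 1.0 / S                                   # step size along the ray
alpha = 1.0 - np.exp(-densities * delta)          # (S, H, W)
trans = np.cumprod(1.0 - alpha + 1e-10, axis=0)   # transmittance
trans = np.concatenate([np.ones((1, H, W), np.float32), trans[:-1]], axis=0)
weights = alpha * trans                           # (S, H, W)
feature_map = (weights[None] * feature_volume).sum(axis=1)  # (C, H, W)

# 4) Toy "neural renderer": a 1x1 convolution (per-pixel linear map) to RGB.
render_weight = rng.standard_normal((3, C)).astype(np.float32)
rgb = np.einsum('oc,chw->ohw', render_weight, feature_map)

print(rgb.shape)  # (3, 16, 16)
```

Because the structure (feature volume and densities) and the appearance (features fed to the renderer) enter at separate stages, the two can in principle be controlled independently, which is the design point the abstract highlights.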

Yinghao Xu, Sida Peng, Ceyuan Yang, Yujun Shen, Bolei Zhou • 2021

Related benchmarks

Task                          | Dataset               | Result     | Rank
Unconditional Image Synthesis | FFHQ 256x256 (test)   | FID 9.1    | 31
Image Synthesis               | FFHQ                  | FID 9.1    | 16
Image Generation              | CARLA 128x128 (test)  | FID 7.9    | 9
Image Synthesis               | CARLA (full dataset)  | FID 7.9    | 7
3D-aware Image Synthesis      | AFHQ Cat (test)       | FID 5.136  | 6
3D-aware Image Synthesis      | LSUN Bedroom (test)   | FID 18.107 | 6
3D-aware Image Synthesis      | FFHQ (test)           | FID 9.598  | 6
