Zero-Shot Text-Guided Object Generation with Dream Fields
About
We combine neural rendering with multi-modal image and text representations to synthesize diverse 3D objects solely from natural language descriptions. Our method, Dream Fields, can generate the geometry and color of a wide range of objects without 3D supervision. Due to the scarcity of diverse, captioned 3D data, prior methods only generate objects from a handful of categories, such as ShapeNet. Instead, we guide generation with image-text models pre-trained on large datasets of captioned images from the web. Our method optimizes a Neural Radiance Field from many camera views so that rendered images score highly with a target caption according to a pre-trained CLIP model. To improve fidelity and visual quality, we introduce simple geometric priors, including sparsity-inducing transmittance regularization, scene bounds, and new MLP architectures. In experiments, Dream Fields produce realistic, multi-view consistent object geometry and color from a variety of natural language captions.
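The training objective described above can be summarized as maximizing CLIP similarity between rendered views and the caption while penalizing mean transmittance to encourage sparse geometry. Below is a minimal sketch of that objective in Python/PyTorch using the public OpenAI CLIP package; the rendered `image` and per-pixel `transmittance` are assumed to come from a differentiable NeRF renderer (not shown), the caption is an arbitrary example, and `tau`/`lam` are illustrative hyperparameter values rather than the paper's exact settings (the official implementation uses JAX).

```python
import torch
import clip  # OpenAI CLIP (https://github.com/openai/CLIP)

# Load the frozen image-text model used to score rendered views against the caption.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

caption = clip.tokenize(["a small green vase with sunflowers"]).to(device)
with torch.no_grad():
    text_feat = model.encode_text(caption)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

def dream_fields_loss(image, transmittance, tau=0.88, lam=0.5):
    """CLIP loss on one rendered view plus a sparsity-inducing transmittance penalty.

    image:         (3, 224, 224) differentiably rendered view, already resized and
                   normalized to CLIP's expected input statistics (assumption).
    transmittance: (H, W) per-pixel transmittance from the volume renderer.
    tau, lam:      target mean transmittance and penalty weight (illustrative values).
    """
    img_feat = model.encode_image(image.unsqueeze(0))
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    clip_loss = -(img_feat * text_feat).sum()  # maximize image-text similarity

    # Penalize dense geometry: push mean transmittance up toward the target tau,
    # i.e. the -min(tau, mean transmittance) regularizer described in the abstract.
    sparsity_loss = -torch.clamp(transmittance.mean(), max=tau)
    return clip_loss + lam * sparsity_loss
```

In a training loop, each step would sample a camera pose on a sphere around the scene, render the radiance field from that pose, and backpropagate this loss into the NeRF parameters, so that the object looks consistent with the caption from many viewpoints.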
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text-conditioned 3D Generation | Text-to-3D Generation | Generation Latency (h): 1.2 | 7 |
| Text-guided 3D synthesis | Manually created dataset of diverse text prompts and objects | CLIP R-Precision (ViT-B/32): 63.24 | 5 |
| Text-to-3D | Objaverse 1.0 (test) | FID: 106.1 | 4 |
| Text-to-3D Generation | 1000 text prompts (test) | Time (min): 72 | 3 |