VR-NeRF: High-Fidelity Virtualized Walkable Spaces
About
We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields. To this end, we designed and built a custom multi-camera rig to densely capture walkable spaces in high fidelity and with multi-view high dynamic range images of unprecedented quality and density. We extend instant neural graphics primitives with a novel perceptual color space for learning accurate HDR appearance, and an efficient mip-mapping mechanism for level-of-detail rendering with anti-aliasing, while carefully optimizing the trade-off between quality and speed. Our multi-GPU renderer enables high-fidelity volume rendering of our neural radiance field model at the full VR resolution of dual 2K×2K at 36 Hz on our custom demo machine. We demonstrate the quality of our results on our challenging high-fidelity datasets, and compare our method and datasets to existing baselines. We release our dataset on our project website.
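The perceptual color space is only summarized above, and the paper defines its own formulation. As a minimal sketch of the general idea, the snippet below uses the standard SMPTE ST 2084 (PQ) transfer function, a widely used perceptual HDR encoding, as an illustrative stand-in: linear radiance is mapped into a perceptually uniform domain before the photometric loss is computed. All function names here are hypothetical, and the PQ curve is an assumption, not the paper's method.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_encode(linear: np.ndarray) -> np.ndarray:
    """Map linear HDR values (normalized to [0, 1], i.e. L / 10,000 nits)
    into the perceptually uniform PQ domain."""
    y = np.clip(linear, 0.0, 1.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def pq_decode(encoded: np.ndarray) -> np.ndarray:
    """Inverse of pq_encode: recover linear HDR values."""
    e = np.clip(encoded, 0.0, 1.0) ** (1.0 / M2)
    return (np.maximum(e - C1, 0.0) / (C2 - C3 * e)) ** (1.0 / M1)

def perceptual_l2_loss(pred_linear: np.ndarray, gt_linear: np.ndarray) -> float:
    """L2 loss in the PQ domain, so a fixed numeric error corresponds to
    roughly the same perceived error in shadows and in highlights."""
    return float(np.mean((pq_encode(pred_linear) - pq_encode(gt_linear)) ** 2))
```

The reason such an encoding matters for HDR training: a plain L2 loss on linear radiance is dominated by bright pixels, while a loss in a perceptual domain weights errors by their visibility across the full dynamic range.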
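The mip-mapping mechanism is likewise only summarized above. The sketch below shows generic footprint-based level-of-detail selection of the kind used for anti-aliased volume rendering: a pixel's world-space footprint grows with distance along the ray, the footprint picks a continuous mip level, and features are blended between the two nearest levels. The small-angle footprint model and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sample_footprint(t: float, fov_y: float, image_height: int) -> float:
    """World-space diameter of a pixel's viewing cone at distance t along
    the ray, under a small-angle approximation (an assumption here)."""
    pixel_angle = fov_y / image_height
    return t * pixel_angle

def mip_level(footprint: float, base_cell_size: float, num_levels: int) -> float:
    """Continuous level of detail: one level coarser per doubling of the
    footprint relative to the finest grid cell, clamped to the pyramid."""
    lod = np.log2(max(footprint / base_cell_size, 1e-8))
    return float(np.clip(lod, 0.0, num_levels - 1))

def sample_with_lod(features_per_level: list, lod: float) -> np.ndarray:
    """Blend features from the two nearest integer mip levels
    (linear filtering across the level axis)."""
    lo = int(np.floor(lod))
    hi = min(lo + 1, len(features_per_level) - 1)
    t = lod - lo
    return (1.0 - t) * features_per_level[lo] + t * features_per_level[hi]
```

Blending across levels rather than snapping to the nearest one avoids visible popping as the viewer moves, which is the same trade-off classic texture mip-mapping makes.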
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Novel View Synthesis | ScanNet++ (test) | LPIPS | 0.301 | 15 |
| Novel View Synthesis | Eyeful Tower Pinhole 1.0 | PSNR | 28.08 | 8 |
| Novel View Synthesis | Eyeful Tower Fisheye 1.0 | PSNR | 34.53 | 7 |
| Novel View Synthesis | Eyeful Tower 1.0 (Overall) | PSNR | 31.01 | 7 |