Depth-Regularized Optimization for 3D Gaussian Splatting in Few-Shot Images
About
In this paper, we present a method to optimize Gaussian splatting with a limited number of images while avoiding overfitting. Representing a 3D scene by combining numerous Gaussian splats has yielded outstanding visual quality. However, it tends to overfit the training views when only a small number of images are available. To address this issue, we introduce a dense depth map as a geometry guide to mitigate overfitting. We obtain the depth map from a pre-trained monocular depth estimation model and align its scale and offset using sparse COLMAP feature points. The adjusted depth aids in the color-based optimization of 3D Gaussian splatting, mitigating floating artifacts and ensuring adherence to geometric constraints. We verify the proposed method on the NeRF-LLFF dataset with varying numbers of training images. Our approach demonstrates robust geometry compared to the original method that relies solely on images. Project page: robot0321.github.io/DepthRegGS
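The scale-and-offset alignment step described above can be sketched as a simple least-squares fit: a monocular depth estimator returns depth only up to an unknown affine transform, so we solve for the scale and offset that best match the metric depths of the sparse COLMAP feature points visible in the view. This is a minimal NumPy sketch under that assumption; the function name `align_depth` and the argument layout are illustrative, not the paper's actual API.

```python
import numpy as np

def align_depth(mono_depth, sparse_depth, sparse_uv):
    """Fit a scale and offset so the monocular depth map agrees with
    sparse COLMAP point depths, then return the adjusted dense map.

    mono_depth:   (H, W) relative depth from a monocular estimator
    sparse_depth: (N,) metric depths of COLMAP feature points in this view
    sparse_uv:    (N, 2) integer pixel coordinates (u, v) of those points
    """
    # Sample the monocular depth at the feature-point pixels (row = v, col = u).
    d = mono_depth[sparse_uv[:, 1], sparse_uv[:, 0]]
    # Least squares for [scale, offset] in: scale * d + offset ≈ sparse_depth.
    A = np.stack([d, np.ones_like(d)], axis=1)
    (scale, offset), *_ = np.linalg.lstsq(A, sparse_depth, rcond=None)
    # Apply the recovered affine transform to the whole dense map.
    return scale * mono_depth + offset
```

In practice one would also discard outlier feature points (e.g. with a robust loss or RANSAC) before the fit, since COLMAP tracks can contain spurious matches.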
Related benchmarks
| Task | Dataset | Result (PSNR) | Rank |
|---|---|---|---|
| Novel View Synthesis | Tanks&Temples (test) | 21.46 | 239 |
| Novel View Synthesis | Mip-NeRF 360 (test) | 16.88 | 166 |
| Few-shot Novel View Synthesis | LLFF static scenes 3 views | 17.17 | 10 |
| Novel View Synthesis | MVImgNet (test) | 21.7 | 8 |