
$R^2$-Mesh: Reinforcement Learning Powered Mesh Reconstruction via Geometry and Appearance Refinement

About

Mesh reconstruction from Neural Radiance Fields (NeRF) is widely used in 3D reconstruction and has been applied across numerous domains. However, existing methods typically rely solely on the given training set images, which restricts supervision to limited observations and makes it difficult to fully constrain geometry and appearance. Moreover, the contribution of each viewpoint for training is not uniform and changes dynamically during the optimization process, which can result in suboptimal guidance for both geometric refinement and rendering quality. To address these limitations, we propose $R^2$-Mesh, a reinforcement learning framework that combines NeRF-rendered pseudo-supervision with online viewpoint selection. Our key insight is to exploit NeRF's rendering ability to synthesize additional high-quality images, enriching training with diverse viewpoint information. To ensure that supervision focuses on the most beneficial perspectives, we introduce a UCB-based strategy with a geometry-aware reward, which dynamically balances exploration and exploitation to identify informative viewpoints throughout training. Within this framework, we jointly optimize SDF geometry and view-dependent appearance under differentiable rendering, while periodically refining meshes to capture fine geometric details. Experiments demonstrate that our method achieves competitive results in both geometric accuracy and rendering quality.
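The UCB-based viewpoint selection described above can be sketched as a standard multi-armed bandit loop: each candidate viewpoint keeps a sample count and a running mean reward, and the next viewpoint is the one maximizing the mean plus an exploration bonus. The snippet below is a minimal illustration of that idea only; the function names, the exploration weight `c`, and the reward signal are hypothetical placeholders, not the paper's actual geometry-aware reward or implementation.

```python
import math

def select_viewpoint(counts, mean_rewards, t, c=1.0):
    """Pick the viewpoint with the highest UCB score.

    counts[i]       -- times viewpoint i has been sampled so far
    mean_rewards[i] -- running mean of the observed reward for viewpoint i
                       (in the paper's setting, a geometry-aware reward)
    t               -- current training step (1-indexed)
    c               -- exploration weight balancing exploration/exploitation
    """
    best_i, best_score = 0, float("-inf")
    for i, (n, mu) in enumerate(zip(counts, mean_rewards)):
        if n == 0:
            return i  # sample every viewpoint at least once
        score = mu + c * math.sqrt(math.log(t) / n)
        if score > best_score:
            best_i, best_score = i, score
    return best_i

def update_viewpoint(counts, mean_rewards, i, reward):
    """Incrementally update the running mean after observing a reward."""
    counts[i] += 1
    mean_rewards[i] += (reward - mean_rewards[i]) / counts[i]
```

In a training loop, `select_viewpoint` would choose where to render NeRF pseudo-supervision at each step, and `update_viewpoint` would fold the resulting reward back in, so rarely-sampled viewpoints retain a large exploration bonus while consistently rewarding ones are exploited.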

Haoyang Wang, Liming Liu, Xinggong Zhang • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| 3D Reconstruction | NeRF-Synthetic (NS) standard (test) | PSNR 29.55 | 11 |
| Surface Reconstruction | NeRF Synthetic | Chair Value 3.3 | 11 |
| Mesh Reconstruction | DTU standard (test) | PSNR 23.2 | 4 |
| Surface Reconstruction | DTU | CD (Scan 24) 0.51 | 3 |
