
Explicit Correspondence Matching for Generalizable Neural Radiance Fields

About

We present a new generalizable NeRF method that directly generalizes to new, unseen scenarios and performs novel view synthesis with as few as two source views. The key to our approach is explicitly modeled correspondence matching information, which provides a geometry prior for predicting NeRF color and density in volume rendering. The explicit correspondence matching is quantified by the cosine similarity between image features sampled at the 2D projections of a 3D point in different views, which provides reliable cues about the surface geometry. Unlike previous methods, where image features are extracted independently for each view, we model cross-view interactions via Transformer cross-attention, which greatly improves the feature matching quality. Our method achieves state-of-the-art results across different evaluation settings, and our experiments show a strong correlation between the learned cosine feature similarity and volume density, demonstrating the effectiveness of our proposed method. The code and model are available on our project page: https://donydchen.github.io/matchnerf
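The core matching cue described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes per-view feature vectors have already been sampled (e.g., bilinearly) at a 3D point's 2D projections, and the function name `cosine_matching_score` is hypothetical.

```python
import numpy as np

def cosine_matching_score(feats):
    """Mean pairwise cosine similarity between per-view features
    sampled at the 2D projections of a single 3D point.

    feats: (V, C) array -- one C-dimensional feature vector per
    source view. A high score suggests the features agree across
    views, i.e., the point likely lies on a visible surface; such
    a score can serve as a geometry cue for density prediction.
    """
    # Normalize each view's feature to unit length
    f = feats / np.linalg.norm(feats, axis=-1, keepdims=True)
    sim = f @ f.T  # (V, V) matrix of pairwise cosine similarities
    v = f.shape[0]
    # Average only the off-diagonal (cross-view) entries
    return sim[~np.eye(v, dtype=bool)].mean()

# Nearly identical features across two views -> score close to 1
surface_like = np.array([[1.0, 0.0, 0.0],
                         [0.99, 0.01, 0.0]])
print(cosine_matching_score(surface_like))

# Unrelated (orthogonal) features -> score close to 0
free_space_like = np.array([[1.0, 0.0, 0.0],
                            [0.0, 1.0, 0.0]])
print(cosine_matching_score(free_space_like))
```

In the full method this per-point score is computed along each camera ray and fed, together with the cross-attended features, into the NeRF decoder that predicts color and density.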

Yuedong Chen, Haofei Xu, Qianyi Wu, Chuanxia Zheng, Tat-Jen Cham, Jianfei Cai · 2023

Related benchmarks

Task                 | Dataset                               | Result     | Rank
Novel View Synthesis | LLFF                                  | PSNR 22.3  | 124
Novel View Synthesis | DTU (test)                            | PSNR 26.91 | 82
Novel View Synthesis | Blender                               | PSNR 23.2  | 60
Novel View Synthesis | Shiny                                 | PSNR 20.77 | 28
Novel View Synthesis | DTU 1 (test)                          | PSNR 26.91 | 22
Novel View Synthesis | Real Forward-facing 640 x 960 (test)  | PSNR 22.43 | 21
Novel View Synthesis | NeRF Synthetic 800 x 800 (test)       | PSNR 23.2  | 21
