
MVSNet: Depth Inference for Unstructured Multi-view Stereo

About

We present an end-to-end deep learning architecture for depth map inference from multi-view images. In the network, we first extract deep visual image features, and then build the 3D cost volume upon the reference camera frustum via differentiable homography warping. Next, we apply 3D convolutions to regularize and regress the initial depth map, which is then refined with the reference image to generate the final output. Our framework flexibly adapts to arbitrary N-view inputs using a variance-based cost metric that maps multiple features into one cost feature. The proposed MVSNet is demonstrated on the large-scale indoor DTU dataset. With simple post-processing, our method not only significantly outperforms previous state-of-the-art methods, but is also several times faster at runtime. We also evaluate MVSNet on the complex outdoor Tanks and Temples dataset, where our method ranked first before April 18, 2018 without any fine-tuning, showing the strong generalization ability of MVSNet.
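The variance-based cost metric mentioned above is what lets the network accept an arbitrary number N of input views: instead of concatenating the N warped feature volumes (which would fix N at training time), it takes their per-element variance, producing a single cost volume of fixed size. A minimal NumPy sketch of that aggregation step, with hypothetical tensor shapes chosen for illustration:

```python
import numpy as np

def variance_cost_volume(feature_volumes):
    """Aggregate N warped feature volumes into one cost volume.

    Implements the variance-based cost metric described in the
    abstract: the cost at each voxel is the per-channel variance of
    the N view features, so any number of views maps to a cost
    feature of fixed size.

    feature_volumes: array of shape (N, C, D, H, W) -- N feature
    volumes already warped onto the reference camera frustum
    (C channels, D depth planes, H x W spatial resolution).
    """
    mean = feature_volumes.mean(axis=0)                    # (C, D, H, W)
    # Variance as E[V^2] - (E[V])^2, averaged over the N views.
    var = (feature_volumes ** 2).mean(axis=0) - mean ** 2
    return var

# Toy usage: 3 views, 4 channels, 2 depth planes, 5x5 features.
vols = np.random.rand(3, 4, 2, 5, 5).astype(np.float32)
cost = variance_cost_volume(vols)
print(cost.shape)  # (4, 2, 5, 5)
```

Note that identical features across all views yield zero cost, which matches the intuition that a depth hypothesis is good when the warped views agree; the 3D convolutions then regularize this raw cost volume before depth regression.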

Yao Yao, Zixin Luo, Shiwei Li, Tian Fang, Long Quan • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Monocular Depth Estimation | DDAD (test) | RMSE | 8.21 | 122 |
| Multi-view Stereo | Tanks and Temples Intermediate set | Mean F1 Score | 43.48 | 110 |
| Multi-view Stereo | DTU (test) | Accuracy | 39.6 | 61 |
| Multi-view Stereo | DTU 1 (evaluation) | Accuracy Error (mm) | 0.396 | 51 |
| Multi-view Stereo | Tanks & Temples | Family | 55.99 | 46 |
| Multi-view Stereo | Tanks & Temples Intermediate | F-score | 43.48 | 43 |
| Multi-view Depth Estimation | DDAD (test) | AbsRel | 0.112 | 40 |
| Multi-view Stereo Reconstruction | DTU (evaluation) | Mean Distance (mm), Acc. | 0.396 | 35 |
| 3D Reconstruction | DTU | Average Error | 2.38 | 32 |
| Multi-view Depth Estimation | ScanNet (test) | Abs Rel | 0.094 | 23 |

Showing 10 of 23 rows.

Other info

Code
