
Differentiable Diffusion for Dense Depth Estimation from Multi-view Images

About

We present a method to estimate dense depth by optimizing a sparse set of points such that their diffusion into a depth map minimizes a multi-view reprojection error from RGB supervision. We optimize point positions, depths, and weights with respect to the loss by differentiable splatting that models points as Gaussians with analytic transmittance. Further, we develop an efficient optimization routine that can simultaneously optimize the 50k+ points required for complex scene reconstruction. We validate our routine using ground-truth data and show high reconstruction quality. Then, we apply this to light field and wider-baseline images via self-supervision, and show improvements in both average and outlier error for depth maps diffused from inaccurate sparse points. Finally, we compare qualitative and quantitative results to image processing and deep learning methods. http://visual.cs.brown.edu/diffdiffdepth
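The core idea above can be sketched in code: sparse points with depths and weights are splatted as 2D Gaussians into a dense depth map, and because the rendering is differentiable, gradients of a reconstruction loss flow back to the point positions, depths, and weights. This is a minimal sketch, not the authors' implementation: the function names, the fixed Gaussian width, and the simple normalized-weighting scheme are assumptions, and the paper additionally models analytic transmittance along rays and uses multi-view reprojection rather than a direct depth target.

```python
import jax
import jax.numpy as jnp

H, W, SIGMA = 32, 32, 2.0  # image size and assumed Gaussian splat width

def splat_depth(xy, depth, weight):
    """Render a dense (H, W) depth map from sparse points.

    xy: (N, 2) point positions, depth: (N,) depths, weight: (N,) weights.
    """
    ys, xs = jnp.mgrid[0:H, 0:W]
    grid = jnp.stack([xs, ys], axis=-1).astype(jnp.float32)        # (H, W, 2)
    d2 = jnp.sum((grid[None] - xy[:, None, None]) ** 2, axis=-1)   # (N, H, W)
    g = weight[:, None, None] * jnp.exp(-d2 / (2.0 * SIGMA ** 2))  # Gaussians
    norm = jnp.sum(g, axis=0) + 1e-8                               # avoid /0
    return jnp.sum(g * depth[:, None, None], axis=0) / norm        # (H, W)

def loss_fn(params, target):
    # L1 reconstruction loss against a target depth map; the paper instead
    # uses a multi-view RGB reprojection error as self-supervision.
    pred = splat_depth(params["xy"], params["depth"], params["weight"])
    return jnp.mean(jnp.abs(pred - target))

params = {
    "xy": jnp.array([[8.0, 8.0], [24.0, 20.0]]),
    "depth": jnp.array([1.0, 3.0]),
    "weight": jnp.array([1.0, 1.0]),
}
target = jnp.full((H, W), 2.0)

# Gradients with respect to positions, depths, and weights in one call.
grads = jax.grad(loss_fn)(params, target)
```

An optimizer (e.g. gradient descent or Adam) would then update all three parameter groups jointly; scaling this to the 50k+ points mentioned above is where the paper's efficient optimization routine comes in.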

Numair Khan, Min H. Kim, James Tompkin · 2021

Related benchmarks

Task                 | Dataset                            | Metric | Result | Rank
Disparity Estimation | Living Room synthetic light field  | MSE    | 0.2    | 6
Disparity Estimation | Piano synthetic light field        | MSE    | 0.0871 | 6
Disparity Estimation | Dino synthetic light field         | MSE    | 0.0086 | 6
Disparity Estimation | Boxes synthetic light field        | MSE    | 0.0917 | 6
Disparity Estimation | Sideboard synthetic light field    | MSE    | 0.0223 | 6
Disparity Estimation | Cotton synthetic light field       | MSE    | 0.0307 | 6
Multi-view Stereo    | Living Room-MVS                    | MSE    | 0.17   | 4
Multi-view Stereo    | Piano-MVS                          | MSE    | 0.69   | 4
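For reference, the MSE metric reported above is the mean squared error between estimated and ground-truth disparity over valid pixels. The helper below is an illustrative sketch, not the benchmarks' official evaluation code; the function name and optional mask argument are assumptions.

```python
import numpy as np

def disparity_mse(pred, gt, mask=None):
    """Mean squared disparity error, optionally restricted to valid pixels."""
    err = (pred - gt) ** 2
    if mask is not None:
        err = err[mask]          # evaluate only where ground truth is valid
    return float(np.mean(err))

pred = np.array([[0.1, 0.2], [0.3, 0.4]])
gt = np.array([[0.0, 0.2], [0.3, 0.5]])
score = disparity_mse(pred, gt)  # close to 0.005
```

Some light-field benchmarks additionally scale this value (e.g. by 100) before reporting; whether that convention applies to the numbers above is not stated on this page.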
