
Grounding Image Matching in 3D with MASt3R

About

Image Matching is a core component of all best-performing algorithms and pipelines in 3D vision. Yet despite matching being fundamentally a 3D problem, intrinsically linked to camera pose and scene geometry, it is typically treated as a 2D problem. This makes sense as the goal of matching is to establish correspondences between 2D pixel fields, but also seems like a potentially hazardous choice. In this work, we take a different stance and propose to cast matching as a 3D task with DUSt3R, a recent and powerful 3D reconstruction framework based on Transformers. Based on pointmaps regression, this method displayed impressive robustness in matching views with extreme viewpoint changes, yet with limited accuracy. We aim here to improve the matching capabilities of such an approach while preserving its robustness. We thus propose to augment the DUSt3R network with a new head that outputs dense local features, trained with an additional matching loss. We further address the issue of quadratic complexity of dense matching, which becomes prohibitively slow for downstream applications if not carefully treated. We introduce a fast reciprocal matching scheme that not only accelerates matching by orders of magnitude, but also comes with theoretical guarantees and, lastly, yields improved results. Extensive experiments show that our approach, coined MASt3R, significantly outperforms the state of the art on multiple matching tasks. In particular, it beats the best published methods by 30% (absolute improvement) in VCRE AUC on the extremely challenging Map-free localization dataset.
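To make the fast reciprocal matching idea concrete, here is a minimal sketch of the general scheme the abstract alludes to: starting from a sparse set of seed pixels, each point is mapped to its nearest neighbour in the other image's feature map and back again; fixed points of this round trip are mutual (reciprocal) matches and are collected, while the remaining points are iterated from where they landed. This avoids computing the full quadratic all-pairs reciprocal check. All function names, the brute-force L2 nearest-neighbour search, and the parameters below are illustrative assumptions, not MASt3R's actual implementation.

```python
import numpy as np

def nn(query, db):
    # Brute-force nearest neighbour by squared L2 distance (illustrative only;
    # a real implementation would use the dense feature maps more efficiently).
    d = ((query[:, None, :] - db[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def fast_reciprocal_match(feat1, feat2, n_seeds=64, n_iters=10, seed=0):
    """Iterative reciprocal matching sketch: collect fixed points of the
    image1 -> image2 -> image1 nearest-neighbour round trip."""
    rng = np.random.default_rng(seed)
    active = rng.choice(len(feat1), size=min(n_seeds, len(feat1)), replace=False)
    matches = []
    for _ in range(n_iters):
        if len(active) == 0:
            break
        idx2 = nn(feat1[active], feat2)       # map seeds to image 2
        back = nn(feat2[idx2], feat1)         # map back to image 1
        converged = back == active            # fixed points = reciprocal matches
        matches.extend(zip(active[converged], idx2[converged]))
        active = np.unique(back[~converged])  # iterate from non-converged points
    return matches
```

In this toy form, seeds that are already mutual nearest neighbours converge in one iteration; the others drift toward a fixed point, so only a few rounds of cheap sparse NN queries are needed instead of a dense all-pairs comparison.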

Vincent Leroy, Yohann Cabon, Jérôme Revaud • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic Segmentation | Cityscapes | mIoU | 58.9 | 578 |
| Monocular Depth Estimation | KITTI | Abs Rel | 0.077 | 161 |
| Monocular Depth Estimation | ETH3D | Abs Rel | 46.91 | 117 |
| Monocular Depth Estimation | NYU V2 | Delta 1 Acc | 89.6 | 113 |
| Video Depth Estimation | Sintel | Relative Error (Rel) | 0.327 | 109 |
| Semantic Correspondence | PF-WILLOW | PCK@0.1 (bbox) | 42.1 | 109 |
| Relative Pose Estimation | MegaDepth 1500 | AUC @ 5° | 42.4 | 104 |
| Video Depth Estimation | BONN | Relative Error (Rel) | 0.167 | 103 |
| Monocular Depth Estimation | DIODE | Abs Rel | 54.9 | 93 |
| Camera Pose Estimation | Sintel | ATE | 0.185 | 92 |

Showing 10 of 133 rows.
