
RoMa: Robust Dense Feature Matching

About

Feature matching is an important computer vision task that involves estimating correspondences between two images of a 3D scene, and dense methods estimate all such correspondences. The aim is to learn a robust model, i.e., a model able to match under challenging real-world changes. In this work, we propose such a model, leveraging frozen pretrained features from the foundation model DINOv2. Although these features are significantly more robust than local features trained from scratch, they are inherently coarse. We therefore combine them with specialized ConvNet fine features, creating a precisely localizable feature pyramid. To further improve robustness, we propose a tailored transformer match decoder that predicts anchor probabilities, which enables it to express multimodality. Finally, we propose an improved loss formulation through regression-by-classification with subsequent robust regression. We conduct a comprehensive set of experiments that show that our method, RoMa, achieves significant gains, setting a new state-of-the-art. In particular, we achieve a 36% improvement on the extremely challenging WxBS benchmark. Code is provided at https://github.com/Parskatt/RoMa
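The "regression-by-classification" idea mentioned above can be illustrated with a minimal sketch: instead of directly regressing a coordinate, the model predicts a probability over a fixed set of anchors and decodes a point estimate from that distribution. Everything below (function names, anchor layout, shapes) is illustrative, not RoMa's actual implementation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def decode_coordinate(logits, anchors):
    """Decode a point estimate from per-anchor logits.

    logits  : (..., K) unnormalized scores, one per anchor
    anchors : (K,) anchor positions along one coordinate axis
    """
    probs = softmax(logits)                # (..., K) anchor probabilities
    return (probs * anchors).sum(axis=-1)  # expected anchor position

# Example: 8 anchors evenly spaced in [-1, 1] (normalized coordinates).
anchors = np.linspace(-1.0, 1.0, 8)
logits = np.zeros(8)
logits[5] = 4.0  # strongly favour the 6th anchor
x = decode_coordinate(logits, anchors)
```

Because the output is a full distribution over anchors rather than a single value, the decoder can represent multimodal matches (e.g. repeated structures) before a point estimate is extracted; a subsequent robust regression stage can then refine it.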

Johan Edstedt, Qiyu Sun, Georg Bökman, Mårten Wadenbäck, Michael Felsberg • 2023

Related benchmarks

Task | Dataset | Result | Rank
Relative Pose Estimation | MegaDepth 1500 | AUC @ 20°: 86.3 | 151
Relative Pose Estimation | ScanNet 1500 pairs (test) | AUC @ 5°: 28.9 | 56
Homography Estimation | HPatches | -- | 55
Retinal Image Alignment | FIRE | Acceptable Success Rate: 98.51 | 48
Pose Estimation | ScanNet | AUC @ 5°: 16.8 | 41
Pose Estimation | MegaDepth 1500 (test) | AUC @ 5°: 62.6 | 38
Retinal Image Alignment | FLORI21 | Acceptable Rate: 93.33 | 35
Retinal Image Alignment | KBSMC | Acceptable Rate: 33.42 | 35
Pose Estimation | RE10K | AUC @ 5°: 0.546 | 35
Aerial Visual Localization | UAVD4L LoD (in-Traj.) | Accuracy (2m-2°): 93.27 | 33

Showing 10 of 76 rows.
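The "AUC @ N°" entries above follow the usual pose-estimation convention: the area under the recall-vs-error-threshold curve up to N degrees, normalized to [0, 1]. A minimal sketch under that assumption (the function name and error definition are illustrative):

```python
import numpy as np

def pose_auc(errors_deg, max_threshold_deg):
    """Area under the recall-vs-threshold curve, normalized to [0, 1].

    errors_deg: per-image-pair pose errors in degrees (the max of the
    rotation and translation angular errors is a common choice).
    """
    errors = np.sort(np.asarray(errors_deg, dtype=float))
    n = len(errors)
    thresholds = np.linspace(0.0, max_threshold_deg, 1000)
    # recall(t) = fraction of pairs with error <= t
    recall = np.searchsorted(errors, thresholds, side="right") / n
    # trapezoidal integration, normalized by the threshold range
    dt = thresholds[1] - thresholds[0]
    area = np.sum((recall[:-1] + recall[1:]) * 0.5 * dt)
    return area / max_threshold_deg

errs = [1.0, 2.0, 3.0, 10.0]
auc5 = pose_auc(errs, 5.0)  # roughly 0.45 for this toy example
```

Reporting AUC at several thresholds (5°, 10°, 20°) rewards methods that are accurate at fine thresholds rather than merely passable at coarse ones.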

Other info

Code: https://github.com/Parskatt/RoMa