# RoMa: Robust Dense Feature Matching

## About
Feature matching is an important computer vision task that involves estimating correspondences between two images of a 3D scene; dense methods estimate all such correspondences. The aim is to learn a robust model, i.e., a model able to match under challenging real-world changes. In this work, we propose such a model, leveraging frozen pretrained features from the foundation model DINOv2. Although these features are significantly more robust than local features trained from scratch, they are inherently coarse. We therefore combine them with specialized ConvNet fine features, creating a precisely localizable feature pyramid. To further improve robustness, we propose a tailored transformer match decoder that predicts anchor probabilities, which enables it to express multimodality. Finally, we propose an improved loss formulation through regression-by-classification with subsequent robust regression. We conduct a comprehensive set of experiments that show that our method, RoMa, achieves significant gains, setting a new state-of-the-art. In particular, we achieve a 36% improvement on the extremely challenging WxBS benchmark. Code is provided at https://github.com/Parskatt/RoMa.
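The anchor-probability idea can be pictured with a toy sketch. This is an illustrative simplification, not the authors' implementation: a softmax over a fixed set of anchor coordinates lets the decoder express multimodal match distributions (two competing candidate locations), and a regressed offset refines the coordinate around the dominant mode. All names and numbers below are made up for illustration.

```python
# Conceptual sketch of regression-by-classification over match anchors
# (an illustrative simplification, NOT the RoMa implementation).
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def decode_match(anchor_logits, anchor_coords, offset):
    """anchor_logits: (K,) scores for K anchors.
    anchor_coords: (K, 2) fixed 2D anchor positions.
    offset: (2,) regressed refinement around the chosen anchor."""
    probs = softmax(anchor_logits)        # anchor probabilities (can be multimodal)
    best = int(np.argmax(probs))          # pick the dominant mode
    coord = anchor_coords[best] + offset  # local regression refines it
    return coord, probs

# Toy example: 3 anchors on a line, a bimodal score distribution.
anchors = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
logits = np.array([2.0, -1.0, 1.9])       # two competing modes at x=0 and x=1
coord, probs = decode_match(logits, anchors, offset=np.array([0.02, 0.0]))
```

A plain regression head would average the two modes and predict a point near x=0.5 where neither candidate lies; classifying over anchors first avoids that failure mode, which is the motivation for the regression-by-classification loss.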
## Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Relative Pose Estimation | MegaDepth 1500 | AUC@20° | 86.3 | 151 |
| Relative Pose Estimation | ScanNet 1500 pairs (test) | AUC@5° | 28.9 | 56 |
| Homography Estimation | HPatches | -- | -- | 55 |
| Retinal Image Alignment | FIRE | Acceptable Success Rate | 98.51 | 48 |
| Pose Estimation | ScanNet | AUC@5° | 16.8 | 41 |
| Pose Estimation | MegaDepth 1500 (test) | AUC@5° | 62.6 | 38 |
| Retinal Image Alignment | FLORI21 | Acceptable Rate | 93.33 | 35 |
| Retinal Image Alignment | KBSMC | Acceptable Rate | 33.42 | 35 |
| Pose Estimation | RE10K | AUC@5° | 0.546 | 35 |
| Aerial Visual Localization | UAVD4L LoD (in-Traj.) | Accuracy (2m-2°) | 93.27 | 33 |
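The AUC@τ° entries above are the standard pose-estimation metric: the area under the recall-vs-error curve of per-pair pose errors, clipped at threshold τ and normalized so a perfect method scores 1. A minimal self-contained sketch of the common computation (function and variable names are my own, not from the benchmarks' code):

```python
# Pose-error AUC@threshold: area under the cumulative recall curve,
# clipped at the threshold and normalized to [0, 1].
# A generic sketch of the standard metric, not any benchmark's exact code.
import numpy as np

def pose_auc(errors, threshold):
    """errors: per-pair pose errors in degrees; threshold: tau in degrees."""
    errors = np.sort(np.asarray(errors, dtype=float))
    recall = (np.arange(len(errors)) + 1) / len(errors)
    # Prepend the origin so the curve starts at (0, 0).
    errors = np.concatenate(([0.0], errors))
    recall = np.concatenate(([0.0], recall))
    # Clip the curve at the threshold.
    last = int(np.searchsorted(errors, threshold))
    e = np.concatenate((errors[:last], [threshold]))
    r = np.concatenate((recall[:last], [recall[last - 1]]))
    # Trapezoidal integration, normalized by the threshold.
    area = np.sum((e[1:] - e[:-1]) * (r[1:] + r[:-1]) / 2.0)
    return float(area / threshold)

# Example: one pair with 2.5° error, evaluated as AUC@5°.
score = pose_auc([2.5], 5.0)  # 0.75
```

Because recall is integrated over the whole [0, τ] range, AUC rewards methods whose errors are small, not merely under the threshold, which is why the table reports it at several thresholds.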