
Who Handles Orientation? Investigating Invariance in Feature Matching

About

Finding matching keypoints between images is a core problem in 3D computer vision, yet modern matchers struggle with large in-plane rotations. A straightforward mitigation is to learn rotation invariance via data augmentation, but it remains unclear at which stage of the pipeline rotation invariance should be incorporated. In this paper, we study this question in the context of a modern sparse matching pipeline. We perform extensive experiments by training on a large collection of 3D vision datasets and evaluating on popular image matching benchmarks. Surprisingly, we find that incorporating rotation invariance already in the descriptor yields performance similar to handling it in the matcher. However, rotation invariance emerges earlier in the matcher when it is learned in the descriptor, allowing for a faster rotation-invariant matcher. Further, we find that enforcing rotation invariance does not hurt upright performance when training at scale. Finally, we study the emergence of rotation invariance through scale and find that increasing the training data size substantially improves generalization to rotated images. We release two matchers robust to in-plane rotations that achieve state-of-the-art performance on, e.g., multi-modal (WxBS), extreme (HardMatch), and satellite (SatAst) image matching. Code is available at https://github.com/davnords/loma.
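The rotation augmentation mentioned above hinges on keeping ground-truth correspondences consistent after rotating one image of a training pair. A minimal sketch of that bookkeeping is below; it is an illustration under assumed conventions (pixel coordinates as (x, y), rotation about the image center), not code from the paper's repository:

```python
import numpy as np

def rotate_keypoints(kpts, angle_deg, image_hw):
    """Rotate 2D keypoints (x, y) about the image center by angle_deg.

    If the image itself is rotated by the same angle with the same center,
    the ground-truth correspondences between the two views stay valid.
    """
    h, w = image_hw
    center = np.array([(w - 1) / 2.0, (h - 1) / 2.0])
    t = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return (np.asarray(kpts, dtype=float) - center) @ rot.T + center

# Augment one training pair: keypoints in image B are remapped so that
# the pairing (A_i <-> B_i) survives the in-plane rotation of B.
kpts_b = np.array([[10.0, 20.0], [100.0, 50.0]])
kpts_b_rot = rotate_keypoints(kpts_b, 90.0, image_hw=(128, 128))
```

In a full pipeline the image would be warped with the matching transform (e.g. an affine warp with the same angle and center); only the coordinate bookkeeping is shown here since that is where consistency bugs typically arise.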

David Nordström, Johan Edstedt, Fredrik Kahl, Georg Bökman • 2026

Related benchmarks

Task                      Dataset                Metric     Result  Rank
Relative Pose Estimation  MegaDepth 1500         AUC@20°    83.4    151
Feature Matching          WxBS                   mAA@10px   71.2    30
Relative Pose Estimation  ScanNet 1500           AUC@5°     28.4    30
Image Matching            WxBS Rotated           mAA@10px   62.8    6
Image Matching            SatAst                 AUC@10°    47.7    6
Image Matching            HardMatch Rotated      mAA@5px    37.4    6
Image Matching            HardMatch Non-rotated  mAA@5px    37.9    6
