
ContextDesc: Local Descriptor Augmentation with Cross-Modality Context

About

Most existing studies on learning local features focus on patch-based descriptions of individual keypoints while neglecting the spatial relations established by their keypoint locations. In this paper, we go beyond local detail representation by introducing context awareness to augment off-the-shelf local feature descriptors. Specifically, we propose a unified learning framework that leverages and aggregates cross-modality contextual information, including (i) visual context from a high-level image representation, and (ii) geometric context from the 2D keypoint distribution. Moreover, we propose an effective N-pair loss that eschews empirical hyper-parameter search and improves convergence. The proposed augmentation scheme is lightweight compared with the raw local feature description, yet yields remarkable improvements on several large-scale benchmarks with diverse scenes, demonstrating both strong practicality and generalization ability in geometric matching applications.

Zixin Luo, Tianwei Shen, Lei Zhou, Jiahui Zhang, Yao Yao, Shiwei Li, Tian Fang, Long Quan · 2019
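The abstract mentions an N-pair loss that avoids hand-tuned margin hyper-parameters. As a rough illustration, here is a minimal sketch of a generic N-pair loss over a batch of L2-normalized descriptors, where each anchor's same-index positive competes against every other positive in the batch; this is the standard formulation and not necessarily the exact variant used in the paper.

```python
import numpy as np

def n_pair_loss(anchors, positives):
    """Generic N-pair loss for a batch of L2-normalized descriptors.

    anchors, positives: arrays of shape (B, D). Row i of `positives`
    is the match for row i of `anchors`; all other rows act as
    negatives. Equivalent to softmax cross-entropy over the
    similarity matrix with the diagonal as the target class,
    so no margin hyper-parameter is needed.
    """
    # Similarity matrix: sim[i, j] = <anchor_i, positive_j>
    sim = anchors @ positives.T                       # (B, B)
    # Numerically stable log-softmax over each row
    sim = sim - sim.max(axis=1, keepdims=True)
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the true (diagonal) matches
    return -np.mean(np.diag(log_softmax))
```

With well-matched descriptors the diagonal similarities dominate and the loss approaches zero; with random descriptors it sits near log(B), which is what drives the embedding to separate matches from non-matches.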

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Homography Estimation | HPatches | Overall Accuracy (< 1px) | 40.9 | 59 |
| Homography Estimation | HPatches (viewpoint) | Accuracy (< 1px) | 29.6 | 27 |
| Image Matching | HPatches (full) | MMA (Viewpoint) | 0.657 | 21 |
| Local Feature Matching | HPatches Viewpoint v1.0 | MMA score | 65.7 | 12 |
| Local Feature Matching | HPatches Overall v1.0 | MMA score | 63.6 | 12 |
| Homography Estimation | HPatches Illumination Change | Accuracy @ 1px | 0.531 | 12 |
| Local Feature Matching | HPatches Illumination v1.0 | MMA score | 61.3 | 12 |
