Universal Correspondence Network
About
We present a deep learning framework for accurate visual correspondences and demonstrate its effectiveness for both geometric and semantic matching, spanning from rigid motions to intra-class shape or appearance variations. In contrast to previous CNN-based approaches that optimize a surrogate patch similarity objective, we use deep metric learning to directly learn a feature space that preserves either geometric or semantic similarity. Our fully convolutional architecture, along with a novel correspondence contrastive loss, allows faster training by effective reuse of computations, accurate gradient computation through the use of thousands of examples per image pair, and faster testing with $O(n)$ feed-forward passes for $n$ keypoints, instead of $O(n^2)$ for typical patch similarity methods. We propose a convolutional spatial transformer to mimic patch normalization in traditional features like SIFT, which is shown to dramatically boost accuracy for semantic correspondences across intra-class shape variations. Extensive experiments on the KITTI, PASCAL, and CUB-2011 datasets demonstrate the significant advantages of our features over prior works that use either hand-constructed or learned features.
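To make the idea concrete, below is a minimal NumPy sketch of a correspondence contrastive loss of the kind described above: positive keypoint pairs are pulled together in feature space while negative pairs are pushed beyond a margin, and features for all keypoints come from a single dense feature map per image (one feed-forward pass each). The function name, argument layout, and margin value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def correspondence_contrastive_loss(feat_a, feat_b, pairs, labels, margin=1.0):
    """Contrastive loss over keypoint correspondences.

    feat_a, feat_b: (H, W, C) dense feature maps from the two images.
    pairs: (N, 4) int array of (row_a, col_a, row_b, col_b) keypoint locations.
    labels: (N,) float array; 1 for a true correspondence, 0 for a negative pair.
    """
    fa = feat_a[pairs[:, 0], pairs[:, 1]]  # (N, C) features sampled at keypoints
    fb = feat_b[pairs[:, 2], pairs[:, 3]]
    d = np.linalg.norm(fa - fb, axis=1)    # per-pair feature distance
    pos = labels * d ** 2                              # pull positives together
    neg = (1.0 - labels) * np.maximum(0.0, margin - d) ** 2  # push negatives apart
    return 0.5 * np.mean(pos + neg)
```

Because features for every keypoint are read out of the same two feature maps, the cost of scoring $n$ keypoints is $O(n)$ network passes rather than the $O(n^2)$ patch comparisons a patch-similarity network would need.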
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic Correspondence | SPair-71k (test) | PCK@0.1 | 17.7 | 122 |
| Semantic Correspondence | PF-WILLOW | PCK@0.1 (bbox) | 54 | 109 |
| Semantic Correspondence | PF-Pascal (test) | PCK@0.1 | 75.1 | 106 |
| Semantic keypoint transfer | PF-Pascal (test) | PCK@0.05 | 29.9 | 35 |
| Semantic Correspondence | PF-PASCAL | PCK@0.1 | 75.1 | 29 |
| Semantic Correspondence | SPair-71k | PCK@0.1 | 17.7 | 24 |
| Semantic Correspondence | CUB | PCK@0.1 | 52.1 | 14 |