VIRD: View-Invariant Representation through Dual-Axis Transformation for Cross-View Pose Estimation
About
Accurate global localization is critical for autonomous driving and robotics, but GNSS-based approaches often degrade due to occlusion and multipath effects. As an emerging alternative, cross-view pose estimation predicts the 3-DoF pose (planar position and orientation) of a ground-view camera with respect to a geo-referenced satellite image. However, existing methods struggle to bridge the large viewpoint gap between the ground and satellite views, mainly because spatial correspondences between them are limited. We propose VIRD, a cross-view pose estimation method that constructs view-invariant representations through a dual-axis transformation. VIRD first applies a polar transformation to the satellite view to establish horizontal correspondence with the ground view, then applies context-enhanced positional attention to the ground and polar-transformed satellite features to mitigate vertical misalignment, explicitly bridging the viewpoint gap. To further strengthen view invariance, a view-reconstruction loss encourages the derived representations to reconstruct both the original-view and cross-view images. Experiments on the KITTI and VIGOR datasets show that VIRD outperforms state-of-the-art methods without orientation priors, reducing median position and orientation errors by 50.7% and 76.5% on KITTI, and by 18.0% and 46.8% on VIGOR, respectively.
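For readers unfamiliar with the aerial-to-polar warp, the sketch below shows one common formulation used in cross-view localization: each output row indexes radial distance from the satellite image centre (the assumed camera position) and each output column indexes azimuth, which roughly lines up with the horizontal axis of a panoramic ground view. This is a minimal PyTorch sketch of the generic transform, not VIRD's implementation; the output resolution and the north-up assumption are illustrative.

```python
import math

import torch
import torch.nn.functional as F

def polar_transform(sat: torch.Tensor, out_h: int = 128, out_w: int = 512) -> torch.Tensor:
    """Warp a square, north-up satellite image to polar coordinates.

    sat: (B, C, S, S) tensor; returns a (B, C, out_h, out_w) tensor whose
    top row samples the satellite image edge (far range) and whose bottom
    row samples the centre (the assumed camera position).
    """
    B, _, S, _ = sat.shape
    device = sat.device
    i = torch.arange(out_h, device=device, dtype=torch.float32)
    j = torch.arange(out_w, device=device, dtype=torch.float32)
    r = (out_h - 1 - i) / (out_h - 1)          # normalized radius in [0, 1]
    theta = 2.0 * math.pi * j / out_w          # azimuth in [0, 2*pi)
    r, theta = torch.meshgrid(r, theta, indexing="ij")
    # Cartesian sampling locations, centred at (S/2, S/2), north-up.
    x = (S / 2.0) + (S / 2.0) * r * torch.sin(theta)
    y = (S / 2.0) - (S / 2.0) * r * torch.cos(theta)
    # Normalize to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * x / (S - 1) - 1.0
    grid_y = 2.0 * y / (S - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    return F.grid_sample(sat, grid, align_corners=True)
```

The view-reconstruction loss can likewise be read as a symmetric reconstruction objective: decoders attached to each view's representation should be able to reproduce both the same-view and the cross-view image. The decoders `dec_ground`/`dec_polar` and the L1 distance below are assumptions for illustration; the paper's actual decoder architecture and loss form may differ.

```python
def view_reconstruction_loss(feat_g, feat_s, img_g, img_s_polar, dec_ground, dec_polar):
    """Hedged sketch: each view's representation reconstructs both views.

    feat_g / feat_s: features from the ground and polar-transformed
    satellite branches; dec_ground / dec_polar are hypothetical decoders
    producing images in the respective views.
    """
    loss_same = (F.l1_loss(dec_ground(feat_g), img_g)
                 + F.l1_loss(dec_polar(feat_s), img_s_polar))
    loss_cross = (F.l1_loss(dec_ground(feat_s), img_g)
                  + F.l1_loss(dec_polar(feat_g), img_s_polar))
    return loss_same + loss_cross
```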
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Location and orientation estimation | VIGOR (Same-Area) | Location Mean Error (m) | 2.57 | 28 |
| Location and orientation estimation | VIGOR (Cross-Area) | Location Mean Error (m) | 3.85 | 28 |
| Position and Orientation Estimation | KITTI Cross-area | Position Lateral Recall R@1m (%) | 45.88 | 13 |
| Position and Orientation Estimation | KITTI Same-area | Position Mean Error (m) | 5.47 | 7 |