
Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation

About

Accurate relative pose is one of the key components in visual odometry (VO) and simultaneous localization and mapping (SLAM). Recently, self-supervised learning frameworks that jointly optimize relative pose and target-image depth have attracted the attention of the community. Previous works rely on the photometric error computed from depths and poses between adjacent frames, which contains large systematic errors in realistic scenes due to reflective surfaces and occlusions. In this paper, we bridge the gap between geometric loss and photometric loss by introducing a matching loss constrained by epipolar geometry into the self-supervised framework. Evaluated on the KITTI dataset, our method outperforms state-of-the-art unsupervised ego-motion estimation methods by a large margin. The code and data are available at https://github.com/hlzz/DeepMatchVO.
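The epipolar-geometry constraint underlying such a matching loss can be sketched as follows. This is an illustrative NumPy formulation, not the paper's implementation (see the linked repository for that): given feature matches between two frames and a predicted relative pose, each match should satisfy x2ᵀ E x1 = 0, where E = [t]× R is the essential matrix, so the mean absolute residual can serve as a geometric loss on the pose. The function names here are hypothetical.

```python
import numpy as np

def skew(t):
    """3x3 skew-symmetric matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_matching_loss(x1, x2, R, t, K):
    """Mean algebraic epipolar residual |x2^T E x1| over matched points.

    x1, x2 : (N, 2) pixel coordinates of matched features in frames 1 and 2
    R, t   : relative pose mapping frame-1 points to frame 2 (X2 = R @ X1 + t)
    K      : (3, 3) camera intrinsic matrix
    """
    E = skew(t) @ R                                   # essential matrix
    K_inv = np.linalg.inv(K)
    h1 = K_inv @ np.c_[x1, np.ones(len(x1))].T        # normalized homogeneous, 3xN
    h2 = K_inv @ np.c_[x2, np.ones(len(x2))].T
    residuals = np.abs(np.sum(h2 * (E @ h1), axis=0)) # |x2^T E x1| per match
    return residuals.mean()
```

For correct matches under the true relative pose this residual is zero, so minimizing it with respect to the predicted pose pushes the network toward geometrically consistent ego-motion, independently of photometric conditions at the matched pixels.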

Tianwei Shen, Zixin Luo, Lei Zhou, Hanyu Deng, Runze Zhang, Tian Fang, Long Quan • 2019

Related benchmarks

Task                        Dataset                        Metric                        Result  Rank
Monocular Depth Estimation  KITTI Raw Eigen (test)         RMSE                          4.36    159
Visual Odometry             KITTI Seq. 09                  Translation Error (%)         9.91    20
Visual Odometry             KITTI Seq. 10                  Translational Error (%)       12.18   20
Visual Odometry             KITTI Odometry Seq. 09 (test)  et (%)                        9.91    6
Visual Odometry             KITTI Odometry Seq. 10 (test)  Translational Error (%)       12.18   6
Visual Odometry             KITTI sequence 09              Median Translation Error (m)  18.36   5
Camera pose estimation      KITTI odometry (Seq. 10)       --                            --      5
