
Towards Better Generalization: Joint Depth-Pose Learning without PoseNet

About

In this work, we tackle the essential problem of scale inconsistency in self-supervised joint depth-pose learning. Most existing methods assume that a consistent scale of depth and pose can be learned across all input samples, which makes the learning problem harder and results in degraded performance and limited generalization in indoor environments and long-sequence visual odometry applications. To address this issue, we propose a novel system that explicitly disentangles scale from the network estimation. Instead of relying on a PoseNet architecture, our method recovers the relative pose by directly solving the fundamental matrix from dense optical flow correspondences, and uses a two-view triangulation module to recover an up-to-scale 3D structure. We then align the scale of the depth prediction with the triangulated point cloud and use the transformed depth map for depth error computation and a dense reprojection check. The whole system can be jointly trained end-to-end. Extensive experiments show that our system not only reaches state-of-the-art performance on KITTI depth and flow estimation, but also significantly improves the generalization ability of existing self-supervised depth-pose learning methods under a variety of challenging scenarios, achieving state-of-the-art results among self-supervised learning-based methods on the KITTI Odometry and NYUv2 datasets. Furthermore, we present some interesting findings on the limited generalization ability of PoseNet-based relative pose estimation methods. Code is available at https://github.com/B1ueber2y/TrianFlow.

Wang Zhao, Shaohui Liu, Yezhi Shu, Yong-Jin Liu • 2020
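The geometric core described in the abstract — recovering relative pose from flow correspondences via an epipolar solve, triangulating an up-to-scale structure, and aligning the scale of a depth prediction to it — can be sketched on synthetic data. The sketch below is illustrative, not the authors' TrianFlow implementation: it uses a plain eight-point essential-matrix solver on noiseless normalized coordinates where the paper solves the fundamental matrix from dense flow with robust estimation, and a simple median-ratio scale alignment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: 3D points in front of camera 1.
X = np.column_stack([rng.uniform(-2, 2, 100),
                     rng.uniform(-2, 2, 100),
                     rng.uniform(4, 10, 100)])

# Ground-truth relative motion (camera 2 w.r.t. camera 1).
theta = 0.1
R_gt = np.array([[np.cos(theta), 0, np.sin(theta)],
                 [0, 1, 0],
                 [-np.sin(theta), 0, np.cos(theta)]])
t_gt = np.array([0.5, 0.1, 0.2])  # metric translation; its norm is unknown to the solver

def project(X, R, t):
    """Project world points into a camera's normalized image plane."""
    Xc = X @ R.T + t
    return Xc[:, :2] / Xc[:, 2:3]

# Normalized-plane correspondences (stand-in for dense optical flow matches).
x1 = project(X, np.eye(3), np.zeros(3))
x2 = project(X, R_gt, t_gt)

def essential_8pt(x1, x2):
    """Linear eight-point solve for E (coordinates already normalized)."""
    u1, v1, u2, v2 = x1[:, 0], x1[:, 1], x2[:, 0], x2[:, 1]
    A = np.column_stack([u2*u1, u2*v1, u2, v2*u1, v2*v1, v2,
                         u1, v1, np.ones_like(u1)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    U, _, Vt = np.linalg.svd(E)  # enforce rank 2 with equal singular values
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

def triangulate(P1, P2, p1, p2):
    """Linear (DLT) two-view triangulation of one correspondence."""
    A = np.stack([p1[0]*P1[2] - P1[0], p1[1]*P1[2] - P1[1],
                  p2[0]*P2[2] - P2[0], p2[1]*P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]

E = essential_8pt(x1, x2)

# Decompose E into the four (R, t) candidates; pick by cheirality
# (most points in front of both cameras).
U, _, Vt = np.linalg.svd(E)
if np.linalg.det(U) < 0: U = -U
if np.linalg.det(Vt) < 0: Vt = -Vt
W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
best = None
for R in (U @ W @ Vt, U @ W.T @ Vt):
    for t in (U[:, 2], -U[:, 2]):
        P2 = np.hstack([R, t[:, None]])
        Xs = np.array([triangulate(P1, P2, a, b) for a, b in zip(x1, x2)])
        z2 = (Xs @ R.T + t)[:, 2]
        n_front = np.sum((Xs[:, 2] > 0) & (z2 > 0))
        if best is None or n_front > best[0]:
            best = (n_front, R, t, Xs)
_, R_est, t_est, X_tri = best  # t_est is unit norm; X_tri is up-to-scale

# Scale alignment: bring a network's depth prediction onto the triangulated
# structure with a single scalar (median ratio, a simple robust choice).
pred_depth = X[:, 2] * 3.7  # fake network output with an arbitrary scale
s = np.median(X_tri[:, 2] / pred_depth)
aligned_depth = pred_depth * s  # now consistent with the up-to-scale pose
```

After alignment, `aligned_depth` and the unit-norm pose `(R_est, t_est)` live at the same (arbitrary) scale, so a reprojection-style photometric or depth loss is well posed per sample without assuming a globally consistent scale. Variable names and the median-ratio choice are illustrative assumptions, not the paper's exact losses.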

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Monocular Depth Estimation | KITTI (Eigen) | Abs Rel | 0.113 | 502 |
| Optical Flow Estimation | KITTI 2015 (train) | -- | -- | 431 |
| Depth Estimation | NYU v2 (test) | Threshold Accuracy (delta < 1.25) | 70.1 | 423 |
| Monocular Depth Estimation | NYU v2 (test) | Abs Rel | 0.189 | 257 |
| Monocular Depth Estimation | KITTI | Abs Rel | 0.113 | 161 |
| Monocular Depth Estimation | KITTI 2015 (Eigen split) | Abs Rel | 0.113 | 95 |
| Monocular Depth Estimation | KITTI Improved GT (Eigen) | Abs Rel | 0.113 | 92 |
| Depth Estimation | ScanNet (test) | Abs Rel | 0.179 | 65 |
| Single-view depth estimation | NYUv2 36 (test) | Abs Rel | 0.189 | 21 |
| Single-view depth estimation | NYU official 654 images v2 (test) | Abs Rel | 0.189 | 21 |

(Showing 10 of 19 rows)

Other info

Code: https://github.com/B1ueber2y/TrianFlow
