
Direct Sparse Odometry

About

We propose a novel direct sparse visual odometry formulation. It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry -- represented as inverse depth in a reference frame -- and camera motion. This is achieved in real time by omitting the smoothness prior used in other direct methods and instead sampling pixels evenly throughout the images. Since our method does not depend on keypoint detectors or descriptors, it can naturally sample pixels from across all image regions that have intensity gradient, including edges or smooth intensity variations on mostly white walls. The proposed model integrates a full photometric calibration, accounting for exposure time, lens vignetting, and non-linear response functions. We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness.
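The core ideas above (an inverse-depth parameterization, a photometric error between frames, and photometric calibration for response function, vignetting, and exposure) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the intrinsics, the affine brightness parameters (a, b), and the nearest-neighbour lookup are simplifying assumptions for clarity.

```python
import numpy as np

# Hypothetical pinhole intrinsics, for illustration only.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

def photometric_correct(raw, inv_response, vignette):
    """Photometric calibration: map raw 8-bit values through the inverse
    camera response function (a 256-entry lookup table) and divide out
    the per-pixel vignetting attenuation."""
    return inv_response[raw] / vignette

def backproject(u, v, inv_depth):
    """Unproject pixel (u, v) with inverse depth d = 1/z into 3D."""
    z = 1.0 / inv_depth
    return np.array([(u - cx) / fx * z, (v - cy) / fy * z, z])

def project(p):
    """Project a 3D point back onto the image plane."""
    return fx * p[0] / p[2] + cx, fy * p[1] / p[2] + cy

def photometric_residual(I_ref, I_tgt, u, v, inv_depth, R, t,
                         exposure_ref=1.0, exposure_tgt=1.0,
                         a=0.0, b=0.0):
    """Photometric error of one sampled point: intensity in the reference
    frame vs. intensity at its reprojection in the target frame, after
    compensating exposure times and an affine brightness model (a, b).
    Both images are assumed already photometrically corrected."""
    p_tgt = R @ backproject(u, v, inv_depth) + t
    u2, v2 = project(p_tgt)
    u2i, v2i = int(round(u2)), int(round(v2))  # nearest-neighbour lookup
    if not (0 <= v2i < I_tgt.shape[0] and 0 <= u2i < I_tgt.shape[1]):
        return None  # point projected outside the target image
    scale = (exposure_tgt / exposure_ref) * np.exp(a)
    return I_tgt[v2i, u2i] - (scale * I_ref[int(v), int(u)] + b)
```

In the full method this residual is accumulated (with a robust norm) over a small pixel pattern around each sampled point, and geometry and camera motion are optimized jointly over a sliding window of keyframes.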

Jakob Engel, Vladlen Koltun, Daniel Cremers · 2016

Related benchmarks

Task              Dataset                                     Result                     Rank
Visual Odometry   TUM-RGBD                                    freiburg1/xyz Error: 3.8   34
Tracking          TUM RGB-D 44 (various sequences)            Average Error: 11.1        28
Camera Tracking   BONN dynamic sequences                      Balloon Error: 7.3         25
SLAM              ROTIO                                       Success Rate (%): 1        24
Camera Tracking   TUM dynamic scene sequences RGB-D (test)    f3/w_s ATE (cm): 1.5       17
Tracking          EuRoC Dataset                               MH 01 Score: 4.6           13
Monocular SLAM    EuRoC (test)                                ATE Error (MH03): 0.172    12
Visual Odometry   KITTI Odometry Sequence 04                  RMSE: 0.422                11
Tracking          Wild-SLAM MoCap Dataset 1.0 (test)          Score (ANYmal2): 2.5       11
Visual Odometry   KITTI Odometry Sequence 05                  RMSE: 47.4605              8

Showing 10 of 33 rows.
