
MDE-VIO: Enhancing Visual-Inertial Odometry Using Learned Depth Priors

About

Traditional monocular Visual-Inertial Odometry (VIO) systems struggle in low-texture environments where sparse visual features are insufficient for accurate pose estimation. To address this, dense Monocular Depth Estimation (MDE) has been widely explored as a complementary information source. While recent complex foundation models based on Vision Transformers (ViTs) offer dense, geometrically consistent depth, their computational demands typically preclude real-time deployment on edge devices. Our work bridges this gap by integrating learned depth priors directly into the VINS-Mono optimization backend. We propose a novel framework that enforces affine-invariant depth consistency and pairwise ordinal constraints, explicitly filtering unstable artifacts via variance-based gating. This approach adheres strictly to the computational limits of edge devices while robustly recovering metric scale. Extensive experiments on the TartanGround and M3ED datasets demonstrate that our method prevents divergence in challenging scenarios and delivers significant accuracy gains, reducing Absolute Trajectory Error (ATE) by up to 28.3%. Code will be made available.
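As a rough illustration of the ideas named in the abstract (the paper's actual formulation is not given here), an affine-invariant depth prior can be aligned to sparse metric depths from the VIO backend by solving for a per-frame scale and shift in least squares, after discarding points whose predicted variance is high. The function name, the variance threshold, and the gating rule below are hypothetical choices for the sketch, not the authors' implementation.

```python
import numpy as np

def align_affine_invariant(pred_depth, sparse_depth, var=None, var_thresh=0.5):
    """Fit s, t so that sparse_depth ~= s * pred_depth + t (least squares),
    resolving the scale/shift ambiguity of an affine-invariant MDE prior.

    pred_depth   -- dense or sampled network depths (1-D array here)
    sparse_depth -- metric depths at tracked features (NaN where missing)
    var          -- optional per-point predicted variance; points above
                    var_thresh are gated out (hypothetical gating rule)
    """
    mask = np.isfinite(sparse_depth)
    if var is not None:
        mask &= var < var_thresh  # variance-based gating of unstable points
    x, y = pred_depth[mask], sparse_depth[mask]
    # Solve [x 1] @ [s t]^T = y in the least-squares sense.
    A = np.stack([x, np.ones_like(x)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s * pred_depth + t, s, t
```

The recovered scale also gives the metric scale that a monocular prior lacks; in a full system the aligned depths would then enter the optimization as consistency residuals alongside the pairwise ordinal terms.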

Arda Alniak, Sinan Kalkan, Mustafa Mert Ankarali, Afsar Saranli, Abdullah Aydin Alatan • 2026

Related benchmarks

Task | Dataset | Result | Rank
Visual-Inertial Odometry | TartanGround Downtown P2001 | ATE RMSE 0.27 | 13
Visual-Inertial Odometry | M3ED | Accuracy (Hard Sequence) 57 | 11
