
360DVO: Deep Visual Odometry for Monocular 360-Degree Camera

About

Monocular omnidirectional visual odometry (OVO) systems leverage 360-degree cameras to overcome field-of-view limitations of perspective VO systems. However, existing methods, reliant on handcrafted features or photometric objectives, often lack robustness in challenging scenarios, such as aggressive motion and varying illumination. To address this, we present 360DVO, the first deep learning-based OVO framework. Our approach introduces a distortion-aware spherical feature extractor (DAS-Feat) that adaptively learns distortion-resistant features from 360-degree images. These sparse feature patches are then used to establish constraints for effective pose estimation within a novel omnidirectional differentiable bundle adjustment (ODBA) module. To facilitate evaluation in realistic settings, we also contribute a new real-world OVO benchmark. Extensive experiments on this benchmark and public synthetic datasets (TartanAir V2 and 360VO) demonstrate that 360DVO surpasses state-of-the-art baselines (including 360VO and OpenVSLAM), improving robustness by 50% and accuracy by 37.5%. Homepage: https://chris1004336379.github.io/360DVO-homepage
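The pipeline described above, distortion-aware features feeding an omnidirectional differentiable bundle adjustment, rests on treating 360-degree pixels as unit bearing vectors on the sphere rather than planar image coordinates, so that reprojection error becomes an angle between bearings. The sketch below is illustrative only, not the authors' implementation: the equirectangular-to-bearing mapping and the angular residual are standard constructions, and all function names are hypothetical.

```python
import numpy as np

def pixel_to_bearing(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit bearing vector."""
    lon = (u / width) * 2.0 * np.pi - np.pi     # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi    # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def spherical_residual(p_world, R, t, bearing_obs):
    """Angular reprojection error: angle between the bearing predicted by
    projecting a 3D point through pose (R, t) and the observed bearing."""
    p_cam = R @ p_world + t
    bearing_pred = p_cam / np.linalg.norm(p_cam)
    cos_angle = np.clip(bearing_pred @ bearing_obs, -1.0, 1.0)
    return np.arccos(cos_angle)
```

In a differentiable BA module, residuals of this form (over all feature patches and frames) would be minimized with respect to poses and depths inside the network's forward pass; the sketch only shows the geometry of a single constraint.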

Xiaopeng Guo, Yinzhe Xu, Huajian Huang, Sai-Kit Yeung • 2026

Related benchmarks

Task | Dataset | Result | Rank
Visual Odometry | TartanAir V2 CountryHouse Easy | ATE (m): 0.004 | 13
Visual Odometry | TartanAir V2 CountryHouse Hard | ATE (m): 0.005 | 10
Visual Odometry | TartanAir V2 Average (Evaluation Set) | Success Rate: 1 | 7
Visual Odometry | TartanAir V2 OldTownNight Easy | ATE (m): 0.039 | 6
Visual Odometry | TartanAir V2 VictorianStreet Easy | ATE (m): 0.025 | 6
Visual Odometry | TartanAir V2 VictorianStreet Hard | ATE (m): 0.02 | 5
Visual Odometry | TartanAir V2 OldTownNight Hard | ATE (m): 0.145 | 4
Visual Odometry | 360VO synthetic dataset (test) | ATE: 1.11 | 3
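The ATE values in the table are Absolute Trajectory Error, the standard VO metric: after aligning the estimated trajectory to ground truth (full evaluations typically use an SE(3) or Sim(3) Umeyama alignment), report the RMSE of per-pose position differences. A minimal sketch, assuming the trajectories are already aligned and time-associated:

```python
import numpy as np

def ate_rmse(gt_positions, est_positions):
    """RMSE of translational error between aligned, time-associated
    ground-truth and estimated positions, each of shape (N, 3)."""
    diffs = gt_positions - est_positions
    return float(np.sqrt(np.mean(np.sum(diffs ** 2, axis=1))))
```

For example, an estimate offset from ground truth by 3 mm in x and 4 mm in z at every pose yields an ATE of 0.005 m, the same order as the table's TartanAir V2 entries.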
