
ZeroVO: Visual Odometry with Minimal Assumptions

About

We introduce ZeroVO, a novel visual odometry (VO) algorithm that achieves zero-shot generalization across diverse cameras and environments, overcoming limitations in existing methods that depend on predefined or static camera calibration setups. Our approach incorporates three main innovations. First, we design a calibration-free, geometry-aware network structure capable of handling noise in estimated depth and camera parameters. Second, we introduce a language-based prior that infuses semantic information to enhance robust feature extraction and generalization to previously unseen domains. Third, we develop a flexible, semi-supervised training paradigm that iteratively adapts to new scenes using unlabeled data, further boosting the model's ability to generalize across diverse real-world scenarios. We analyze complex autonomous driving contexts, demonstrating over 30% improvement against prior methods on three standard benchmarks, KITTI, nuScenes, and Argoverse 2, as well as a newly introduced, high-fidelity synthetic dataset derived from Grand Theft Auto (GTA). By not requiring fine-tuning or camera calibration, our work broadens the applicability of VO, providing a versatile solution for real-world deployment at scale.

Lei Lai, Zekai Yin, Eshed Ohn-Bar • 2025

Related benchmarks

| Task            | Dataset                                  | Result                          | Rank |
|-----------------|------------------------------------------|---------------------------------|------|
| Visual Odometry | KITTI 10Hz (00-10)                       | Translational Error: 6.81       | 8    |
| Visual Odometry | nuScenes 12Hz (unseen regions)           | Translation Error (m): 9.74     | 8    |
| Visual Odometry | Argoverse 2 10Hz (unseen camera setups)  | Translational Error (t_err): 4.64 | 8  |
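For readers unfamiliar with the metric in the table above, VO benchmarks such as KITTI typically report translational error as drift relative to distance traveled. Below is a minimal, hypothetical sketch of that idea (not the authors' evaluation code): per-step position error summed over the trajectory and divided by the ground-truth path length, expressed as a percentage. Poses are simplified to 2D positions here.

```python
def translational_error(gt_positions, est_positions):
    """Percent translation drift: summed per-step position error
    divided by total ground-truth path length, times 100.

    gt_positions, est_positions: lists of (x, y) tuples of equal length.
    """
    assert len(gt_positions) == len(est_positions) >= 2
    err = 0.0   # accumulated per-step translation error
    path = 0.0  # accumulated ground-truth distance traveled
    for i in range(1, len(gt_positions)):
        # ground-truth and estimated step vectors between frames i-1 and i
        gx = gt_positions[i][0] - gt_positions[i - 1][0]
        gy = gt_positions[i][1] - gt_positions[i - 1][1]
        ex = est_positions[i][0] - est_positions[i - 1][0]
        ey = est_positions[i][1] - est_positions[i - 1][1]
        err += ((gx - ex) ** 2 + (gy - ey) ** 2) ** 0.5
        path += (gx ** 2 + gy ** 2) ** 0.5
    return 100.0 * err / path

# Example: a straight 2 m ground-truth path, with the estimate
# drifting 0.1 m sideways per step -> 10% translational error.
gt = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
est = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2)]
print(translational_error(gt, est))  # -> 10.0
```

The published benchmarks use segment-based variants of this metric (averaged over subsequences of fixed lengths), but the per-step form above captures the same notion of drift per distance traveled.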
