
KV-Tracker: Real-Time Pose Tracking with Transformers

About

Multi-view 3D geometry networks offer a powerful prior but are prohibitively slow for real-time applications. We propose a novel way to adapt them for online use, enabling real-time 6-DoF pose tracking and online reconstruction of objects and scenes from monocular RGB videos. Our method rapidly selects and manages a set of images as keyframes to map a scene or object via $\pi^3$ with full bidirectional attention. We then cache the global self-attention block's key-value (KV) pairs and use them as the sole scene representation for online tracking. This allows for up to a $15\times$ speedup during inference without drift or catastrophic forgetting. Our caching strategy is model-agnostic and can be applied to other off-the-shelf multi-view networks without retraining. We demonstrate KV-Tracker on both scene-level tracking and the more challenging task of on-the-fly object tracking and reconstruction without depth measurements or object priors. Experiments on the TUM RGB-D, 7-Scenes, Arctic and OnePose datasets show the strong performance of our system while maintaining high frame rates of up to ${\sim}27$ FPS.
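The core idea, caching the keyframes' key-value pairs once and letting each new frame attend to that fixed cache instead of re-running full bidirectional attention over all views, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the token counts, projection matrices, and single-head attention are assumptions for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # token dimension (illustrative)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Shared projection weights of the (hypothetical) global attention block.
W_q = rng.standard_normal((d, d)) / np.sqrt(d)
W_k = rng.standard_normal((d, d)) / np.sqrt(d)
W_v = rng.standard_normal((d, d)) / np.sqrt(d)

# --- Mapping step: encode keyframes once, cache their K/V pairs. ---
# 8 keyframes x 196 patch tokens each (stand-in for the mapped scene).
keyframe_tokens = rng.standard_normal((8 * 196, d))
kv_cache = {"K": keyframe_tokens @ W_k, "V": keyframe_tokens @ W_v}

# --- Tracking step: a new frame's queries attend only to the cache, ---
# --- so the expensive multi-view pass is never repeated online.     ---
def track_frame(frame_tokens, cache):
    Q = frame_tokens @ W_q
    attn = softmax(Q @ cache["K"].T / np.sqrt(d))  # cross-attention weights
    return attn @ cache["V"]                       # scene-conditioned features

new_frame = rng.standard_normal((196, d))
features = track_frame(new_frame, kv_cache)
print(features.shape)  # (196, 64)
```

Because the cache is fixed after mapping, per-frame cost scales with the new frame's tokens rather than with all views jointly, which is where the reported speedup comes from.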

Marwan Taher, Ignacio Alzugaray, Kirill Mazur, Xin Kong, Andrew J. Davison · 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Camera Localization | 7 Scenes | Average Position Error (m) | 0.08 | 46 |
| Object Tracking | Arctic Dataset | ATE RMSE (m) | 0.135 | 33 |
| Object Tracking | OnePose original (test) | Accuracy (1cm/1°) | 10.7 | 6 |
| Object Tracking | OnePose Low Texture original (test) | Acc (1cm, 1°) | 12.1 | 6 |
| Camera Tracking | TUM-RGBD | Sequence 360 Error | 0.166 | 5 |
