
QTrack: Query-Driven Reasoning for Multi-modal MOT

About

Multi-object tracking (MOT) has traditionally focused on estimating trajectories of all objects in a video, without selectively reasoning about user-specified targets under semantic instructions. In this work, we introduce a query-driven tracking paradigm that formulates tracking as a spatiotemporal reasoning problem conditioned on natural language queries. Given a reference frame, a video sequence, and a textual query, the goal is to localize and track only the target(s) specified in the query while maintaining temporal coherence and identity consistency. To support this setting, we construct RMOT26, a large-scale benchmark with grounded queries and sequence-level splits to prevent identity leakage and enable robust evaluation of generalization. We further present QTrack, an end-to-end vision-language model that integrates multimodal reasoning with tracking-oriented localization. Additionally, we introduce a Temporal Perception-Aware Policy Optimization strategy with structured rewards to encourage motion-aware reasoning. Extensive experiments demonstrate the effectiveness of our approach for reasoning-centric, language-guided tracking. Code and data are available at https://github.com/gaash-lab/QTrack.
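
As a rough illustration of the task setup described above, the sketch below models the inputs and outputs of query-driven tracking: a reference frame, a video sequence, and a textual query go in; identity-consistent boxes for the queried target(s) come out. All names here (QueryTrackRequest, TrackedTarget, track_with_query) are hypothetical and are not taken from the QTrack code.

from dataclasses import dataclass
from typing import Any, List, Tuple

@dataclass
class TrackedTarget:
    frame_index: int                         # position of the frame in the video sequence
    identity: int                            # identity kept consistent across frames
    box: Tuple[float, float, float, float]   # (x1, y1, x2, y2) in pixel coordinates

@dataclass
class QueryTrackRequest:
    reference_frame: Any       # image in which the textual query is grounded
    video_frames: List[Any]    # the video sequence to track through
    query: str                 # natural-language description of the target(s)

def track_with_query(request: QueryTrackRequest) -> List[TrackedTarget]:
    """Placeholder only: a real model would ground the query in the reference
    frame, then localize and re-identify the matched target(s) in each frame."""
    raise NotImplementedError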

Tajamul Ashraf, Tavaheed Tariq, Sonia Yadav, Abrar Ul Riyaz, Wasif Tak, Moloud Abdar, Janibul Bashir • 2026

Related benchmarks

Task                                   Dataset            Result      Rank
Multiple Object Tracking              MOT17 (test)       MOTA 69     1020
Multi-Object Tracking                 DanceTrack (test)  HOTA 0.66   471
Reasoning-based Multi-Object Tracking  RMOT26             MCP 30      14
