
Motion-o: Trajectory-Grounded Video Reasoning

About

Recent research has made substantial progress on video reasoning, with many models leveraging spatio-temporal evidence chains to strengthen their inference capabilities. At the same time, a growing set of datasets and benchmarks now provides structured annotations designed to support and evaluate such reasoning. However, little attention has been paid to reasoning about how objects move between observations: no prior work has articulated motion patterns by connecting successive observations, leaving trajectory understanding implicit and difficult to verify. We formalize this missing capability as Spatial-Temporal-Trajectory (STT) reasoning and introduce Motion-o, a motion-centric video understanding extension to visual language models that makes trajectories explicit and verifiable. To enable motion reasoning, we also introduce a trajectory-grounding dataset artifact that expands sparse keyframe supervision via augmentation, yielding denser bounding box tracks and a stronger trajectory-level training signal. Finally, we introduce Motion Chain of Thought (MCoT), a structured reasoning pathway that connects grounded observations into explicit trajectories through discrete <motion/> tags summarizing per-object direction, speed, and scale (of velocity) change. To train Motion-o, we design a reward function that compels the model to reason directly over visual evidence, all while requiring no architectural modifications. Empirical results demonstrate that Motion-o improves spatial-temporal grounding and trajectory prediction while remaining fully compatible with existing frameworks, establishing motion reasoning as a critical extension for evidence-based video understanding. Code is available at https://github.com/ostadabbas/Motion-o.
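The abstract describes `<motion/>` tags that discretize per-object direction, speed, and scale change between grounded observations. As a rough illustration of that idea (not the paper's actual implementation; the tag schema, thresholds, and function name below are all assumptions), one could summarize the motion between two bounding boxes like this:

```python
import math

def motion_tag(box_t0, box_t1, dt=1.0):
    """Summarize an object's motion between two bounding boxes
    (x, y, w, h, in pixels) as a discrete <motion/> tag.
    The schema and thresholds here are illustrative guesses,
    not Motion-o's exact specification."""
    x0, y0, w0, h0 = box_t0
    x1, y1, w1, h1 = box_t1

    # Displacement of the box center over the time interval.
    dx = (x1 + w1 / 2) - (x0 + w0 / 2)
    dy = (y1 + h1 / 2) - (y0 + h0 / 2)
    speed = math.hypot(dx, dy) / dt

    # Discretize direction into 8 compass-style bins (image coords: +y is down).
    if speed < 1e-6:
        direction = "static"
    else:
        angle = math.degrees(math.atan2(dy, dx)) % 360
        bins = ["right", "down-right", "down", "down-left",
                "left", "up-left", "up", "up-right"]
        direction = bins[int((angle + 22.5) // 45) % 8]

    # Box-area ratio approximates motion toward/away from the camera.
    scale = (w1 * h1) / max(w0 * h0, 1e-6)
    scale_change = ("growing" if scale > 1.1
                    else "shrinking" if scale < 0.9
                    else "stable")

    speed_label = "fast" if speed > 50 else "moderate" if speed > 5 else "slow"
    return (f'<motion direction="{direction}" speed="{speed_label}" '
            f'scale="{scale_change}"/>')

# Object moves right and grows between frames:
print(motion_tag((10, 10, 40, 40), (80, 10, 44, 44)))
# → <motion direction="right" speed="fast" scale="growing"/>
```

Tags of this kind give a verifiable symbolic link between successive grounded observations, which is the role MCoT assigns them in the reasoning chain.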

Bishoy Galoaa, Shayda Moezzi, Xiangyu Bai, Sarah Ostadabbas • 2026

Related benchmarks

Task                       | Dataset            | Metric           | Result | Rank
---------------------------|--------------------|------------------|--------|-----
Video Understanding        | MVBench            | Accuracy         | 69.2   | 425
Video Understanding        | VideoMME           | Score (Long)     | 60.3   | 248
Video Understanding        | WorldSense         | Score            | 41.5   | 25
Spatio-Temporal Reasoning  | V-STAR (test)      | What Accuracy    | 64.1   | 15
Temporal Video Grounding   | TVGBench (test)    | mIoU             | 39.6   | 10
Video Motion Reasoning     | MotionBench (dev)  | Overall Accuracy | 63.0   | 7
