
Spatio-Temporal Grounding of Large Language Models from Perception Streams

About

Embodied-AI agents must reason about how objects move and interact in 3-D space over time, yet existing smaller frontier Large Language Models (LLMs) still mishandle fine-grained spatial relations, metric distances, and temporal orderings. We introduce the general framework Formally Explainable Spatio-Temporal Scenes (FESTS), which injects verifiable spatio-temporal supervision into an LLM by compiling natural-language queries into Spatial Regular Expressions (SpRE) -- a language combining regular-expression syntax with S4u spatial logic, extended here with universal and existential quantification. The pipeline matches each SpRE against any structured video log and exports aligned (query, frames, match, explanation) tuples, enabling unlimited training data without manual labels. Training a 3-billion-parameter model on 27k such tuples boosts frame-level F1 from 48.5% to 87.5%, matching GPT-4.1 on complex spatio-temporal reasoning while remaining two orders of magnitude smaller, thus bringing spatio-temporal intelligence to Video LLMs.
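The core pipeline idea -- evaluate spatial predicates per frame, match a regex-like temporal pattern over the resulting truth trace, and export (query, frames, match, explanation) tuples -- can be sketched as follows. This is a minimal illustration, not the paper's actual SpRE semantics: the predicate `left_of`, the object/frame representation, and the function names are all hypothetical, and real SpRE additionally involves S4u spatial logic and quantification.

```python
import re
from dataclasses import dataclass

@dataclass
class Obj:
    """Hypothetical object record from a structured video log (center coordinates)."""
    name: str
    x: float
    y: float

def left_of(a: Obj, b: Obj) -> bool:
    # Hypothetical atomic spatial predicate: a's center lies left of b's.
    return a.x < b.x

def match_spre(frames, pred, temporal_re):
    """Evaluate `pred` on each frame to build a '1'/'0' truth trace,
    then match a temporal regular expression over that trace and
    export (query, frames, match, explanation) tuples."""
    trace = "".join("1" if pred(f) else "0" for f in frames)
    tuples = []
    for m in re.finditer(temporal_re, trace):
        span = list(range(m.start(), m.end()))
        explanation = f"predicate holds on frames {span}"
        tuples.append((temporal_re, span, m.group(), explanation))
    return tuples

# Toy log: a cup moves from left of a box to right of it over three frames.
frames = [
    {"cup": Obj("cup", 1, 0), "box": Obj("box", 5, 0)},
    {"cup": Obj("cup", 4, 0), "box": Obj("box", 5, 0)},
    {"cup": Obj("cup", 6, 0), "box": Obj("box", 5, 0)},
]
pred = lambda f: left_of(f["cup"], f["box"])
result = match_spre(frames, pred, r"1+")
# One maximal match: the predicate holds on frames [0, 1].
```

Because the matching is purely mechanical, every exported tuple carries a checkable explanation, which is what makes the supervision verifiable and label-free.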

Jacob Anderson, Bardh Hoxha, Georgios Fainekos, Hideki Okamoto, Danil Prokhorov • 2026

Related benchmarks

Task                        Dataset                                         Frame F1 (F1f)  Rank
Spatio-Temporal Reasoning   Spatio-temporal Reasoning Dataset (8 frames)    85.6            4
Spatio-Temporal Reasoning   Spatio-temporal Reasoning Dataset (12 frames)   85.2            4
Spatio-Temporal Reasoning   Spatio-temporal Reasoning Dataset (16 frames)   82.7            4
Spatio-Temporal Reasoning   Spatio-temporal Reasoning Dataset (Overall)     87.5            4
Spatio-Temporal Reasoning   Spatio-temporal Reasoning Dataset (4 frames)    88.1            4
