
GTASA: Ground Truth Annotations for Spatiotemporal Analysis, Evaluation and Training of Video Models

About

Generating complex multi-actor scenario videos remains difficult even for state-of-the-art neural generators, and evaluating them is hard due to the lack of ground truth for physical plausibility and semantic faithfulness. We introduce GTASA, a corpus of multi-actor videos with per-frame spatial relation graphs and event-level temporal mappings, together with the system that produced it, GEST-Engine, based on Graphs of Events in Space and Time (GEST). We compare our method with both open- and closed-source neural generators and demonstrate its clear advantages both qualitatively (human evaluation of physical validity and semantic alignment) and quantitatively (by training video captioning models). Probing four frozen video encoders across 11 spatiotemporal reasoning tasks enabled by GTASA's exact 3D ground truth reveals that self-supervised encoders capture spatial structure significantly better than VLM visual encoders.
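The abstract describes two annotation layers: per-frame spatial relation graphs and event-level temporal mappings. The page does not publish GTASA's actual schema, so the following is only a minimal illustrative sketch of how such annotations could be structured and queried; all class and field names are hypothetical, not the dataset's real format.

```python
from dataclasses import dataclass, field

@dataclass
class SpatialRelation:
    # Hypothetical edge in a per-frame spatial relation graph.
    subject: str    # actor id, e.g. "actor_1"
    relation: str   # e.g. "left_of", "behind", "touching"
    obj: str        # actor id, e.g. "actor_2"

@dataclass
class Event:
    # Hypothetical event-level temporal mapping to a frame interval.
    label: str
    start_frame: int
    end_frame: int

@dataclass
class VideoAnnotation:
    video_id: str
    # Per-frame graphs: frame index -> list of relation edges.
    frame_graphs: dict[int, list[SpatialRelation]] = field(default_factory=dict)
    events: list[Event] = field(default_factory=list)

    def events_at(self, frame: int) -> list[Event]:
        """Return events whose interval covers the given frame."""
        return [e for e in self.events
                if e.start_frame <= frame <= e.end_frame]

# Minimal usage with invented example data.
ann = VideoAnnotation(video_id="demo")
ann.frame_graphs[0] = [SpatialRelation("actor_1", "left_of", "actor_2")]
ann.events.append(Event("actor_1 walks to actor_2", 0, 30))
active = ann.events_at(10)  # -> the single "walks to" event
```

A structure like this is what makes the probing tasks mentioned above possible: spatial questions are read off the frame graphs, temporal ones off the event intervals.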

Nicolae Cudlenco, Mihai Masala, Marius Leordeanu • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Video description | Human Study, GEST prompts | Similarity Score: 56.64 | 3 |
| Video Generation | Human Evaluation Study, aggregated across video generation categories | Validity Rate: 69 | 3 |
