
Towards Neuro-Symbolic Video Understanding

About

The unprecedented surge in video data production in recent years necessitates efficient tools for extracting meaningful frames from videos for downstream tasks. Long-term temporal reasoning is a key desideratum for frame-retrieval systems. While state-of-the-art foundation models such as VideoLLaMA and ViCLIP are proficient at short-term semantic understanding, they surprisingly fail at long-term reasoning across frames. A key reason for this failure is that they intertwine per-frame perception and temporal reasoning in a single deep network. Hence, decoupling but co-designing semantic understanding and temporal reasoning is essential for efficient scene identification. We propose a system that leverages vision-language models for semantic understanding of individual frames, but reasons about the long-term evolution of events using state machines and temporal logic (TL) formulae that inherently capture memory. Our TL-based reasoning improves the F1 score of complex-event identification by 9-15% over benchmarks that use GPT-4 for reasoning, on state-of-the-art self-driving datasets such as Waymo and NuScenes.
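As a minimal illustration of the decoupling idea (not the paper's implementation), temporal reasoning over per-frame semantics can be sketched as a small state machine that checks a temporal-logic-style pattern, such as "eventually A, and eventually B afterwards", over a sequence of frame labels. The label sequence and the `matches_eventually_then` helper below are hypothetical; in the proposed system, per-frame labels would come from a vision-language model.

```python
def matches_eventually_then(labels, first, then):
    """Check the pattern F(first & F(then)): `first` occurs at some frame,
    and `then` occurs at a strictly later frame.

    The state machine's current state is the only memory carried across
    frames, which is what lets it reason over arbitrarily long videos.
    """
    state = 0  # 0: waiting for `first`; 1: waiting for `then`; 2: accepted
    for label in labels:
        if state == 0 and label == first:
            state = 1
        elif state == 1 and label == then:
            state = 2
            break
    return state == 2


# Hypothetical per-frame labels (in practice, produced by a VLM per frame).
frame_labels = ["road", "pedestrian_crossing", "road", "car_turning", "road"]
print(matches_eventually_then(frame_labels, "pedestrian_crossing", "car_turning"))
```

Because the state machine only inspects one label at a time, perception and reasoning stay decoupled: the VLM can be swapped out without touching the temporal specification.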

Minkyu Choi, Harsh Goel, Mohammad Omama, Yunhao Yang, Sahil Shah, Sandeep Chinchali • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Narrative Reasoning | MMIU (test) | BLEURT | 0.287 | 14 |
| Narrative Reasoning | WebQA (test) | BLEURT | 0.612 | 14 |
| Narrative Reasoning | Ego4D (test) | BLEURT | 0.471 | 14 |
| Narrative Reasoning | MSR-VTT (test) | Accuracy | 3.58 | 14 |
| Narrative Reasoning | VIST (test) | BLEURT | 0.442 | 14 |
| Narrative Reasoning | Pororo (test) | BLEURT | 43.9 | 14 |
