
CLEVRER: CoLlision Events for Video REpresentation and Reasoning

About

The ability to reason about temporal and causal events from videos lies at the core of human intelligence. Most video reasoning benchmarks, however, focus on pattern recognition from complex visual and language input, instead of on causal structure. We study the complementary problem, exploring the temporal and causal structures behind videos of objects with simple visual appearance. To this end, we introduce the CoLlision Events for Video REpresentation and Reasoning (CLEVRER) dataset, a diagnostic video dataset for systematic evaluation of computational models on a wide range of reasoning tasks. Motivated by the theory of human causal judgment, CLEVRER includes four types of questions: descriptive (e.g., "what color"), explanatory ("what is responsible for"), predictive ("what will happen next"), and counterfactual ("what if"). We evaluate various state-of-the-art models for visual reasoning on our benchmark. While these models thrive on the perception-based task (descriptive), they perform poorly on the causal tasks (explanatory, predictive, and counterfactual), suggesting that a principled approach to causal reasoning should incorporate the capability of both perceiving complex visual and language inputs and understanding the underlying dynamics and causal relations. We also study an oracle model that explicitly combines these components via symbolic representations.
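Since results on CLEVRER are typically broken down by question type, a small helper can make the evaluation protocol concrete. The following is a minimal sketch, not the official CLEVRER evaluation code; the record format and the toy predictions are hypothetical, and only the four question-type names come from the dataset description above.

```python
# Sketch: per-question-type accuracy over CLEVRER's four question categories.
# The (question_type, predicted, ground_truth) record format is an assumption
# for illustration, not the dataset's actual annotation schema.
from collections import defaultdict

QUESTION_TYPES = ("descriptive", "explanatory", "predictive", "counterfactual")

def per_type_accuracy(records):
    """records: iterable of (question_type, predicted, ground_truth) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for qtype, pred, gold in records:
        if qtype not in QUESTION_TYPES:
            raise ValueError(f"unknown question type: {qtype}")
        total[qtype] += 1
        correct[qtype] += int(pred == gold)
    # Report only the categories that actually appear in the records.
    return {t: correct[t] / total[t] for t in QUESTION_TYPES if total[t]}

# Toy example with made-up predictions:
records = [
    ("descriptive", "red", "red"),
    ("descriptive", "cube", "sphere"),
    ("counterfactual", "yes", "yes"),
]
print(per_type_accuracy(records))  # {'descriptive': 0.5, 'counterfactual': 1.0}
```

Reporting the causal categories (explanatory, predictive, counterfactual) separately from the descriptive one is what exposes the gap the abstract describes: models that score well on perception can still fail on causal structure.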

Kexin Yi, Chuang Gan, Yunzhu Li, Pushmeet Kohli, Jiajun Wu, Antonio Torralba, Joshua B. Tenenbaum · 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
---- | ------- | ------ | ------ | ----
Video Question Answering | STAR (test) | Interaction Score | 33.25 | 42
Narrative Reasoning | MMIU (test) | BLEURT Score | 0.288 | 14
Narrative Reasoning | MSR-VTT (test) | Accuracy Score | 3.61 | 14
Narrative Reasoning | WebQA (test) | BLEURT | 0.608 | 14
Narrative Reasoning | VIST (test) | BLEURT | 0.439 | 14
Narrative Reasoning | Ego4D (test) | BLEURT | 0.465 | 14
Narrative Reasoning | Pororo (test) | BLEURT Score | 43.5 | 14
Temporal and causal video reasoning | CLEVRER-Humans (test) | Accuracy (Per Option) | 51 | 12
Visual Situated Reasoning | STAR-QA (test) | Accuracy @I | 33.25 | 10
Visual Question Answering | CLEVRER 1.0 (test) | Descriptive Accuracy | 0.881 | 8
