
LogicGaze: Benchmarking Causal Consistency in Visual Narratives via Counterfactual Verification

About

While sequential reasoning enhances the capability of Vision-Language Models (VLMs) to execute complex multimodal tasks, their reliability in grounding these reasoning chains in actual visual evidence remains insufficiently explored. We introduce LogicGaze, a benchmark framework designed to rigorously test whether VLMs can validate sequential causal chains against visual inputs, specifically targeting the pervasive issue of hallucination. Curated from 40,000 video segments drawn from ShareGPT4Video and a subset of Flickr30k imagery, LogicGaze pairs causal sequences with visually contradictory yet linguistically plausible perturbations, compelling models to verify the authenticity of each reasoning step. Our tripartite evaluation protocol, comprising Causal Validation, Grounded Narrative Synthesis, and Perturbation Rejection, exposes significant vulnerabilities in state-of-the-art VLMs such as Qwen2.5-VL-72B. LogicGaze advocates for robust, trustworthy multimodal reasoning, with all resources publicly available in an anonymized repository.
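To make the Perturbation Rejection protocol concrete, the sketch below scores a model that must accept visually grounded reasoning steps and reject their perturbed counterparts. This is an illustrative sketch only: the `Step` class, the pairwise-strict scoring rule, and all names are assumptions, not the benchmark's actual API or metric definition.

```python
# Hypothetical sketch of a Perturbation Rejection score (not the official
# LogicGaze implementation). Each grounded step is paired with a
# linguistically plausible but visually contradictory perturbation; a pair
# counts as correct only if the model accepts the grounded step AND rejects
# the perturbed one.
from dataclasses import dataclass


@dataclass
class Step:
    text: str       # candidate reasoning step
    grounded: bool  # gold label: True if consistent with the visual evidence


def perturbation_rejection_score(predictions, steps):
    """Pairwise-strict accuracy over (grounded, perturbed) step pairs.

    predictions[i] is True if the model judged step i visually supported.
    Steps are ordered as alternating grounded/perturbed pairs.
    """
    assert len(predictions) == len(steps) and len(steps) % 2 == 0
    n_pairs = len(steps) // 2
    correct_pairs = 0
    for i in range(0, len(steps), 2):
        grounded_ok = predictions[i] == steps[i].grounded
        perturbed_ok = predictions[i + 1] == steps[i + 1].grounded
        if grounded_ok and perturbed_ok:
            correct_pairs += 1
    return correct_pairs / n_pairs


# Toy usage: two pairs; the model wrongly accepts the second perturbation.
steps = [
    Step("The cup falls off the table", True),
    Step("The cup floats upward off the table", False),
    Step("The dog catches the ball", True),
    Step("The ball passes through the dog", False),
]
preds = [True, False, True, True]
score = perturbation_rejection_score(preds, steps)  # 1 of 2 pairs correct
```

The pairwise-strict rule (assumed here) is stricter than per-step accuracy: a model that accepts everything scores zero, so it cannot inflate its score by never rejecting a step.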

Rory Driscoll, Alexandros Christoforos, Chadbourne Davis • 2026

Related benchmarks

Task                          Dataset        Metric    Result  Rank
Question Answering            ARC Challenge  Accuracy  72.2    749
Question Answering            OBQA           Accuracy  87.9    276
Multi-hop Question Answering  HotpotQA       F1 Score  64.9    221
Question Answering            PopQA          Accuracy  68.4    186
Question Answering            2Wiki          F1 Score  61.3    75
Question Answering            ARC-C          Accuracy  0.71    68
Multi-hop Question Answering  2Wiki          F1 Score  47.5    41
Question Answering            TQA            Accuracy  73.8    34
Question Answering            HotpotQA       F1 Score  69.1    15
