
Multimodal Fact-Level Attribution for Verifiable Reasoning

About

Multimodal large language models (MLLMs) are increasingly used for real-world tasks involving multi-step reasoning and long-form generation, where reliability requires grounding model outputs in heterogeneous input sources and verifying individual factual claims. However, existing multimodal grounding benchmarks and evaluation methods focus on simplified, observation-based scenarios or limited modalities and fail to assess attribution in complex multimodal reasoning. We introduce MuRGAt (Multimodal Reasoning with Grounded Attribution), a benchmark for evaluating fact-level multimodal attribution in settings that require reasoning beyond direct observation. Given inputs spanning video, audio, and other modalities, MuRGAt requires models to generate answers with explicit reasoning and precise citations, where each citation specifies both modality and temporal segments. To enable reliable assessment, we introduce an automatic evaluation framework that strongly correlates with human judgments. Benchmarking with human and automated scores reveals that even strong MLLMs frequently hallucinate citations despite correct reasoning. Moreover, we observe a key trade-off: increasing reasoning depth or enforcing structured grounding often degrades accuracy, highlighting a significant gap between internal reasoning and verifiable attribution.
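To make the citation format concrete, here is a minimal sketch of what an attributed output could look like. All class and field names here are illustrative assumptions, not the benchmark's actual schema: the point is that each atomic claim carries citations specifying both a modality and a temporal segment.

```python
from dataclasses import dataclass

# Hypothetical data model for a MuRGAt-style attributed answer.
# Names and fields are illustrative, not the benchmark's real schema.

@dataclass
class Citation:
    modality: str     # e.g. "video" or "audio"
    start_sec: float  # start of the cited temporal segment
    end_sec: float    # end of the cited temporal segment

@dataclass
class AttributedFact:
    claim: str                 # one atomic factual claim from the answer
    citations: list[Citation]  # evidence grounding the claim

@dataclass
class ModelOutput:
    reasoning: str
    facts: list[AttributedFact]

answer = ModelOutput(
    reasoning="The speaker is introduced on screen before she talks.",
    facts=[
        AttributedFact(
            claim="The speaker's name appears as an on-screen caption.",
            citations=[Citation("video", 12.0, 15.5)],
        ),
        AttributedFact(
            claim="She states that the experiment ran for six weeks.",
            citations=[Citation("audio", 18.2, 24.0)],
        ),
    ],
)

# Every atomic fact carries at least one modality-plus-segment citation.
assert all(f.citations for f in answer.facts)
```

A structured representation like this is what makes fact-level verification possible: each claim can be checked independently against the segment it cites.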

David Wan, Han Wang, Ziyang Wang, Elias Stengel-Eskin, Hyunji Lee, Mohit Bansal • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multimodal Fact-Level Attribution | WorldSense 1.0 (sampled examples) | - | 24 |
| Multimodal Fact-Level Attribution | Video-MMMU 1.0 (sampled examples) | - | 24 |
| Attribution Coverage | Human Judgments | Pearson Correlation (r): 0.97 | 8 |
| Attribution Precision | Human Judgments | Pearson Correlation (r): 0.65 | 4 |
| Metric Correlation with Human Judgment | Human-annotated Atomic Fact Attribution (test) | Coverage: 97 | 4 |
| MuRGAt-Score | Human Judgments | Pearson Correlation (r): 0.86 | 4 |
| Visual Question Answering | WorldSense 1.0 (sampled examples) | - | 4 |
| Visual Question Answering | Video-MMMU 1.0 (sampled examples) | - | 4 |
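The correlation figures above report agreement between automatic metric scores and human judgments as Pearson's r. A minimal, dependency-free sketch of that computation, using made-up toy scores (the data here is illustrative, not from the benchmark):

```python
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Plain Pearson correlation coefficient, no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: automatic coverage scores vs. human-rated coverage
# for five hypothetical answers (values invented for illustration).
auto_scores  = [0.9, 0.4, 0.75, 0.2, 0.6]
human_scores = [0.85, 0.5, 0.8, 0.25, 0.55]

r = pearson_r(auto_scores, human_scores)  # close to 1.0 when the metric tracks humans
```

A high r (such as the 0.97 reported for coverage above) indicates the automatic metric ranks outputs nearly the same way human annotators do.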

Other info

GitHub
