Graph-to-Frame RAG: Visual-Space Knowledge Fusion for Training-Free and Auditable Video Reasoning

About

When video reasoning requires external knowledge, many systems built on large multimodal models (LMMs) adopt retrieval augmentation to supply the missing context. Appending textual or multi-clip evidence, however, forces heterogeneous signals into a single attention space; we observe diluted attention and higher cognitive load even on non-long videos. The bottleneck is therefore not only what to retrieve but how to represent and fuse external knowledge with the video backbone.

We present Graph-to-Frame RAG (G2F-RAG), a training-free and auditable paradigm that delivers knowledge in the visual space. In the offline stage, an agent builds a problem-agnostic video knowledge graph that integrates entities, events, spatial relations, and linked world knowledge. In the online stage, a hierarchical multi-agent controller decides whether external knowledge is needed, retrieves a minimal sufficient subgraph, and renders it as a single reasoning frame appended to the video. LMMs then perform joint reasoning in a unified visual domain. This design reduces cognitive load and leaves an explicit, inspectable evidence trail.

G2F-RAG is plug-and-play across backbones and scales. It yields consistent gains on diverse public benchmarks, with larger improvements in knowledge-intensive settings. Ablations further confirm that knowledge representation and delivery matter. G2F-RAG reframes retrieval as visual-space knowledge fusion for robust and interpretable video reasoning.
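The online stage described above can be sketched as a small pipeline: retrieve a bounded subgraph around the entities mentioned in the question, then serialize it for rendering. This is a minimal illustration only; the graph contents, function names, and the text serialization (standing in for the actual rendered reasoning frame) are all hypothetical, not taken from the paper.

```python
from collections import deque

# Hypothetical toy video knowledge graph: entity -> list of (relation, entity)
# edges. The entities and relations here are illustrative only.
GRAPH = {
    "chef": [("holds", "knife"), ("in", "kitchen")],
    "knife": [("cuts", "onion")],
    "kitchen": [("contains", "stove")],
    "onion": [],
    "stove": [],
}

def retrieve_subgraph(graph, seeds, max_hops=1):
    """BFS from the question's seed entities, keeping edges within
    max_hops -- a crude stand-in for a 'minimal sufficient subgraph'."""
    edges, seen = [], set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_hops:
            continue
        for rel, dst in graph.get(node, []):
            edges.append((node, rel, dst))
            if dst not in seen:
                seen.add(dst)
                frontier.append((dst, depth + 1))
    return edges

def render_frame(edges):
    """Serialize the subgraph as text lines; a real system would
    rasterize this into a single image frame appended to the video."""
    return "\n".join(f"{src} --{rel}--> {dst}" for src, rel, dst in edges)

edges = retrieve_subgraph(GRAPH, ["chef"], max_hops=1)
frame = render_frame(edges)
```

Because the retrieved edges are explicit, the rendered frame doubles as the inspectable evidence trail the abstract mentions: one can audit exactly which facts were shown to the model.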

Songyuan Yang, Weijiang Yu, Ziyu Liu, Guijian Tang, Wenjing Yang, Huibin Tan, Nong Xiao• 2026

Related benchmarks

Task                                   | Dataset           | Metric              | Result | Rank
Multi-modal Video Understanding        | MVBench           | -                   | -      | 70
Video Understanding                    | MMBench-Video     | Mean Score (0-3)    | 1.85   | 23
General Multi-task Video Understanding | VideoMME w/o sub  | Average Accuracy    | 72     | 22
Open World Video Understanding         | VideoMMMU         | Average Accuracy    | 71.2   | 19
Temporal spatial reasoning             | VSIBench          | Average Accuracy    | 62.9   | 19
Temporal spatial reasoning             | TempCompass       | Average Accuracy    | 76.8   | 17
Open World Video Understanding         | WildVideo         | Average Accuracy    | 63.2   | 14
