Graph-to-Frame RAG: Visual-Space Knowledge Fusion for Training-Free and Auditable Video Reasoning
About
When video reasoning requires external knowledge, many systems built on large multimodal models (LMMs) adopt retrieval augmentation to supply the missing context. Appending textual or multi-clip evidence, however, forces heterogeneous signals into a single attention space; we observe diluted attention and higher cognitive load even on videos of moderate length. The bottleneck is not only what to retrieve but how to represent and fuse external knowledge with the video backbone.

We present Graph-to-Frame RAG (G2F-RAG), a training-free and auditable paradigm that delivers knowledge in the visual space. In the offline stage, an agent builds a problem-agnostic video knowledge graph that integrates entities, events, spatial relations, and linked world knowledge. In the online stage, a hierarchical multi-agent controller decides whether external knowledge is needed, retrieves a minimal sufficient subgraph, and renders it as a single reasoning frame appended to the video. The LMM then performs joint reasoning in a unified visual domain. This design reduces cognitive load and leaves an explicit, inspectable evidence trail.

G2F-RAG is plug-and-play across backbones and model scales. It yields consistent gains on diverse public benchmarks, with larger improvements in knowledge-intensive settings. Ablations further confirm that both knowledge representation and knowledge delivery matter. G2F-RAG reframes retrieval as visual-space knowledge fusion for robust and interpretable video reasoning.
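The online stage described above can be sketched in miniature. This is a hedged illustration, not the paper's implementation: the graph layout, edge types, and function names (`retrieve_subgraph`, `render_reasoning_frame`) are all hypothetical, retrieval is simplified to a hop-bounded expansion from seed entities, and "rendering" is a textual stand-in for rasterizing the subgraph into a single visual frame.

```python
from collections import deque

# Hypothetical offline-built video knowledge graph:
# node -> list of (relation, neighbor) typed edges.
GRAPH = {
    "person_1": [("holds", "trophy"), ("event", "award_ceremony")],
    "trophy": [("linked_knowledge", "Oscar_statuette")],
    "award_ceremony": [("time", "scene_12")],
    "Oscar_statuette": [],
    "scene_12": [],
}

def retrieve_subgraph(graph, seeds, max_hops=2):
    """Collect the edges reachable within max_hops of the seed entities,
    a simplified stand-in for 'minimal sufficient subgraph' retrieval."""
    edges, seen = [], set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for rel, nbr in graph.get(node, []):
            edges.append((node, rel, nbr))
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, hops + 1))
    return edges

def render_reasoning_frame(edges):
    """Textual stand-in for rendering the subgraph as one reasoning frame
    that would be appended to the video's frame sequence."""
    return "\n".join(f"{h} --{r}--> {t}" for h, r, t in edges)

frame = render_reasoning_frame(retrieve_subgraph(GRAPH, ["person_1"]))
print(frame)
```

In the actual system this rendered frame would be an image in the same visual space as the sampled video frames, so the LMM attends to evidence and video jointly rather than across modalities.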
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-modal Video Understanding | MVBench | -- | -- | 70 |
| Video Understanding | MMBench-Video | Mean Score (0-3) | 1.85 | 23 |
| General Multi-task Video Understanding | VideoMME w/o sub | Average Accuracy | 72 | 22 |
| Open World Video Understanding | VideoMMMU | Average Accuracy | 71.2 | 19 |
| Temporal-spatial Reasoning | VSIBench | Average Accuracy | 62.9 | 19 |
| Temporal-spatial Reasoning | TempCompass | Average Accuracy | 76.8 | 17 |
| Open World Video Understanding | WildVideo | Average Accuracy | 63.2 | 14 |