
VRAG-RL: Empower Vision-Perception-Based RAG for Visually Rich Information Understanding via Iterative Reasoning with Reinforcement Learning

About

Effectively retrieving, reasoning over, and understanding visually rich information remains a challenge for retrieval-augmented generation (RAG) methods. Traditional text-based methods cannot handle visual information, while current vision-based RAG approaches are often limited by fixed pipelines and frequently struggle to reason effectively because the fundamental capabilities of the underlying models are insufficiently activated. As reinforcement learning (RL) has proven beneficial for model reasoning, we introduce VRAG-RL, a novel RL framework tailored for complex reasoning across visually rich information. With this framework, vision-language models (VLMs) interact with search engines, autonomously sampling single-turn or multi-turn reasoning trajectories with the help of visual perception tokens and undergoing continual optimization based on these samples. Our approach highlights key limitations of RL in RAG domains: (i) prior multi-modal RAG approaches tend to merely incorporate images into the context, leading to insufficient reasoning-token allocation and neglecting visual-specific perception; and (ii) when models interact with search engines, their queries often fail to retrieve relevant information because the models cannot articulate their requirements, leading to suboptimal performance. To address these challenges, we define an action space tailored for visually rich inputs, with actions including cropping and scaling, allowing the model to gather information from a coarse-to-fine perspective. Furthermore, to bridge the gap between users' original inquiries and the retriever, we employ a simple yet effective reward that integrates query rewriting and retrieval performance with a model-based reward. Our VRAG-RL optimizes VLMs for RAG tasks using specially designed RL strategies, aligning the model with real-world applications. The code is available at https://github.com/Alibaba-NLP/VRAG.
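The trajectory-sampling loop described above (a VLM alternating between search queries and coarse-to-fine visual actions such as cropping, until it commits to an answer) can be sketched as follows. This is a minimal illustration, not the repository's actual API: the action names, the `Step` record, and the `rollout` function are all hypothetical, and the policy, retriever, and cropper are stand-ins for the real model and search engine.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical action tags the VLM can emit in its reasoning trace.
SEARCH, CROP, ANSWER = "search", "crop", "answer"

@dataclass
class Step:
    action: str
    argument: str  # a (rewritten) query, a crop box "x0,y0,x1,y1", or the final answer

def rollout(policy: Callable[[list], Step],
            retrieve: Callable[[str], str],
            crop_region: Callable[[str], str],
            max_turns: int = 6) -> list:
    """Sample one multi-turn trajectory: the model issues search queries to pull
    page images into context, zooms into regions via crop actions (coarse-to-fine),
    and stops when it emits an answer. The collected trajectories are what the RL
    stage would score and optimize against."""
    context: list = []
    for _ in range(max_turns):
        step = policy(context)       # VLM decides the next action from its context
        context.append(step)
        if step.action == SEARCH:
            context.append(retrieve(step.argument))     # retrieved page(s)
        elif step.action == CROP:
            context.append(crop_region(step.argument))  # zoomed-in region
        elif step.action == ANSWER:
            break                     # trajectory ends with a committed answer
    return context
```

In the full method, the reward assigned to such a trajectory would combine retrieval performance of the rewritten queries with a model-based score on the final answer; this sketch only covers the sampling side.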

Qiuchen Wang, Ruixue Ding, Yu Zeng, Zehui Chen, Lin Chen, Shihang Wang, Pengjun Xie, Fei Huang, Feng Zhao • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Long-context document understanding | MMLongBench-Doc | Accuracy 26.6 | 58 |
| Visual Question Answering | SlideVQA | Overall Accuracy 73.41 | 46 |
| Multimodal Document Question Answering | MMLongBench-Doc | Overall Accuracy 35.97 | 44 |
| Document Visual Question Answering | MMLongBench-Doc | Accuracy 31.55 | 34 |
| Visual Information Retrieval and Reasoning | ViDoSeek | Overall Accuracy 67.76 | 18 |
| Long-context Multi-modal Understanding | MMLongBench | Text Accuracy 26.1 | 17 |
| Visual Question Answering | ViDoSeek | Single Accuracy 0.6465 | 14 |
| Long-context document understanding | LongDocURL | Accuracy 44.9 | 14 |
| Multimodal Document Reasoning | SlideVQA, MMLongBench-Doc, and ViDoSeek | Average Score 46.36 | 14 |
| Video Document Seeking | ViDoSeek | Single Score 24.81 | 14 |
