Graph-R1: Towards Agentic GraphRAG Framework via End-to-end Reinforcement Learning
About
Retrieval-Augmented Generation (RAG) mitigates hallucination in LLMs by incorporating external knowledge, but it relies on chunk-based retrieval that lacks structural semantics. GraphRAG methods improve RAG by modeling knowledge as entity-relation graphs, but they still suffer from high construction cost, fixed one-time retrieval, and reliance on long-context reasoning and prompt design. To address these challenges, we propose Graph-R1, an agentic GraphRAG framework trained via end-to-end reinforcement learning (RL). It introduces lightweight knowledge hypergraph construction, models retrieval as a multi-turn agent-environment interaction, and optimizes the agent process with an end-to-end reward mechanism. Experiments on standard RAG datasets show that Graph-R1 outperforms traditional GraphRAG and RL-enhanced RAG methods in reasoning accuracy, retrieval efficiency, and generation quality.
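To make the multi-turn agent-environment interaction concrete, here is a minimal sketch of how an agent might iteratively expand its entity frontier over a knowledge hypergraph and receive a terminal reward. All names (`KnowledgeHypergraph`, `agent_episode`, the coverage-based reward) are illustrative assumptions for this sketch, not the paper's actual API or reward design.

```python
from dataclasses import dataclass, field


@dataclass
class KnowledgeHypergraph:
    # A hyperedge is a relation linking an arbitrary set of entities
    # (unlike an ordinary graph edge, which links exactly two).
    hyperedges: list = field(default_factory=list)

    def add(self, relation: str, entities: set) -> None:
        self.hyperedges.append((relation, frozenset(entities)))

    def retrieve(self, query_entities: set) -> list:
        # Environment step: return hyperedges sharing at least one
        # entity with the agent's current query set.
        return [(r, e) for r, e in self.hyperedges if e & query_entities]


def agent_episode(graph, question_entities, answer_entities, max_turns=3):
    """Multi-turn interaction: each turn the agent queries the graph,
    the environment returns matching hyperedges, and the agent expands
    its known-entity frontier. A terminal reward scores coverage of the
    answer entities (a hypothetical end-to-end reward)."""
    known = set(question_entities)
    for _ in range(max_turns):
        hits = graph.retrieve(known)
        new = set().union(*(e for _, e in hits)) - known if hits else set()
        if not new:  # no new information; stop early
            break
        known |= new
    reward = len(known & answer_entities) / max(len(answer_entities), 1)
    return known, reward
```

In an actual RL setup, `reward` would be backpropagated through the agent's query policy; the loop above only illustrates the environment dynamics the policy is trained against.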
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-hop Question Answering | HotpotQA (test) | F1 62.69 | 198 |
| Multi-hop Question Answering | 2WikiMultiHopQA (test) | EM 55.47 | 143 |
| Multi-hop Question Answering | MuSiQue (test) | F1 46.17 | 111 |
| Single-hop Question Answering | Natural Questions (NQ) (test) | EM 33.59 | 16 |