
R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning

About

Existing Large Reasoning Models (LRMs) have shown the potential of reinforcement learning (RL) to enhance the complex reasoning capabilities of Large Language Models (LLMs). While they achieve remarkable performance on challenging tasks such as mathematics and coding, they often rely on their internal knowledge to solve problems, which can be inadequate for time-sensitive or knowledge-intensive questions, leading to inaccuracies and hallucinations. To address this, we propose R1-Searcher, a novel two-stage outcome-based RL approach designed to enhance the search capabilities of LLMs. This method allows LLMs to autonomously invoke external search systems to access additional knowledge during the reasoning process. Our framework relies exclusively on RL, without requiring process rewards or distillation for a cold start. Our experiments demonstrate that our method significantly outperforms previous strong RAG methods, even when compared to the closed-source GPT-4o-mini.
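The two ingredients the abstract describes, a model that pauses its own generation to call an external search system, and a purely outcome-based reward, can be sketched as below. This is a minimal illustration, not the paper's implementation: the `<search>`/`<answer>` tag names, the `mock_retrieve` stand-in, and the reward values are all assumptions made for the example.

```python
# Sketch of an agentic search rollout plus an outcome-based reward.
# Tag names, the retriever, and the toy "model" are illustrative assumptions.
import re

SEARCH_RE = re.compile(r"<search>(.*?)</search>", re.DOTALL)

def mock_retrieve(query: str) -> str:
    """Stand-in for an external search system (e.g., a Wikipedia retriever)."""
    corpus = {"capital of france": "Paris is the capital of France."}
    return corpus.get(query.lower().strip(), "No results found.")

def rollout(generate, prompt: str, max_calls: int = 4) -> str:
    """Generate until the model emits a <search> query, inject the retrieved
    documents into the context, and resume; stop at a chunk with no search call."""
    context = prompt
    for _ in range(max_calls):
        chunk = generate(context)
        context += chunk
        m = SEARCH_RE.search(chunk)
        if m is None:  # no search call: model produced its final answer
            break
        docs = mock_retrieve(m.group(1))
        context += f"\n<documents>{docs}</documents>\n"
    return context

def outcome_reward(trajectory: str, gold: str) -> float:
    """Outcome-based reward: penalize a missing <answer> tag, otherwise
    score the extracted answer against the gold answer (no process reward)."""
    m = re.search(r"<answer>(.*?)</answer>", trajectory, re.DOTALL)
    if m is None:
        return -1.0  # malformed output
    return 1.0 if gold.lower() in m.group(1).strip().lower() else 0.0

# Usage with a scripted toy model that first searches, then answers.
steps = iter(["I should look this up. <search>capital of France</search>",
              " The documents say Paris. <answer>Paris</answer>"])
traj = rollout(lambda ctx: next(steps), "Q: What is the capital of France?\n")
print(outcome_reward(traj, "Paris"))  # -> 1.0
```

Because only the final answer (and output format) is scored, the model is free to decide when and how often to search, which is what the RL-only, no-cold-start framing above relies on.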

Huatong Song, Jinhao Jiang, Yingqian Min, Jie Chen, Zhipeng Chen, Wayne Xin Zhao, Lei Fang, Ji-Rong Wen • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Mathematical Reasoning | MATH | Accuracy | 67.6 | 535 |
| Multi-hop Question Answering | 2WikiMultihopQA | EM | 35.8 | 387 |
| Multi-hop Question Answering | HotpotQA | -- | -- | 294 |
| Multi-hop Question Answering | HotpotQA (test) | F1 | 50.69 | 255 |
| Mathematical Reasoning | AMC 23 | Accuracy | 37.5 | 198 |
| Multi-hop Question Answering | 2WikiMultiHopQA (test) | EM | 27.34 | 195 |
| Multi-hop Question Answering | MuSiQue | EM | 18.6 | 185 |
| Mathematical Reasoning | AIME24 | Accuracy | 95 | 160 |
| Multi-hop Question Answering | 2Wiki | Exact Match | 58.3 | 152 |
| Question Answering | 2Wiki | F1 | 50.64 | 152 |

Showing 10 of 126 rows.
