
R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning

About

Existing Large Reasoning Models (LRMs) have shown the potential of reinforcement learning (RL) to enhance the complex reasoning capabilities of Large Language Models (LLMs). While they achieve remarkable performance on challenging tasks such as mathematics and coding, they often rely on their internal knowledge to solve problems, which can be inadequate for time-sensitive or knowledge-intensive questions, leading to inaccuracies and hallucinations. To address this, we propose R1-Searcher, a novel two-stage outcome-based RL approach designed to enhance the search capabilities of LLMs. This method allows LLMs to autonomously invoke external search systems to access additional knowledge during the reasoning process. Our framework relies exclusively on RL, without requiring process rewards or distillation for a cold start, effectively generalizing to out-of-domain datasets and supporting both Base and Instruct models. Our experiments demonstrate that our method significantly outperforms previous strong RAG methods, even when compared to the closed-source GPT-4o-mini.
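The core mechanism described above — the model pausing its reasoning to query an external search system, then resuming with the retrieved documents in context — can be sketched as a simple generate-retrieve loop. This is a minimal illustration, not the paper's implementation: the `<search>`/`<documents>` tag names, the stub model, and the stub retriever are all placeholders (R1-Searcher's actual special tokens and retrieval setup may differ).

```python
import re

# Hypothetical tag format for search requests; the real tokens may differ.
QUERY_RE = re.compile(r"<search>(.*?)</search>", re.DOTALL)

def search_augmented_generate(generate, retrieve, prompt, max_calls=3):
    """Generate iteratively; whenever the model emits a <search>...</search>
    query, invoke the external retriever and append its documents to the
    context before resuming generation."""
    context = prompt
    for _ in range(max_calls):
        segment = generate(context)
        context += segment
        match = QUERY_RE.search(segment)
        if match is None:  # no search request: the model finished reasoning
            return context
        docs = retrieve(match.group(1).strip())
        context += "<documents>" + " ".join(docs) + "</documents>"
    return context

# Toy stand-ins for an LLM and a search system (not the paper's components).
def toy_generate(ctx):
    if "<documents>" not in ctx:
        return "I need facts. <search>capital of France</search>"
    return " The answer is Paris."

def toy_retrieve(query):
    return [f"Paris is the capital of France. (query: {query})"]

out = search_augmented_generate(toy_generate, toy_retrieve,
                                "Q: What is the capital of France?")
```

In the RL setup the paper describes, only the final outcome (answer correctness) is rewarded, so the model learns on its own when to emit a search query inside such a loop rather than being supervised at each step.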

Huatong Song, Jinhao Jiang, Yingqian Min, Jie Chen, Zhipeng Chen, Wayne Xin Zhao, Lei Fang, Ji-Rong Wen • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | MATH | Accuracy | 67.6 | 535 |
| Multi-hop Question Answering | 2WikiMultihopQA | EM | 35.8 | 278 |
| Multi-hop Question Answering | HotpotQA (test) | F1 | 46.36 | 198 |
| Mathematical Reasoning | AMC 23 | Accuracy | 37.5 | 198 |
| Multi-hop Question Answering | 2WikiMultiHopQA (test) | EM | 27.34 | 143 |
| Mathematical Reasoning | AIME24 | Accuracy | 95 | 130 |
| Question Answering | HotpotQA | F1 | 56.7 | 114 |
| Multi-hop Question Answering | MuSiQue (test) | F1 | 16.63 | 111 |
| Multi-hop Question Answering | MuSiQue | EM | 18.6 | 106 |
| Mathematical Reasoning | GSM8K | -- | -- | 102 |
Showing 10 of 82 rows
...
