
ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning

About

Large Language Models (LLMs) have shown remarkable capabilities in reasoning, exemplified by the success of OpenAI-o1 and DeepSeek-R1. However, integrating reasoning with external search processes remains challenging, especially for complex multi-hop questions requiring multiple retrieval steps. We propose ReSearch, a novel framework that trains LLMs to Reason with Search via reinforcement learning without using any supervised data on reasoning steps. Our approach treats search operations as integral components of the reasoning chain, where when and how to perform searches is guided by text-based thinking, and search results subsequently influence further reasoning. We train ReSearch on Qwen2.5-7B(-Instruct) and Qwen2.5-32B(-Instruct) models and conduct extensive experiments. Despite being trained on only one dataset, our models demonstrate strong generalizability across various benchmarks. Analysis reveals that ReSearch naturally elicits advanced reasoning capabilities such as reflection and self-correction during the reinforcement learning process.
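The abstract describes search operations as steps inside the reasoning chain: the model's text-based thinking decides when to issue a query, and the retrieved results condition subsequent reasoning until a final answer is produced. A minimal sketch of such an interleaved rollout loop is below. The tag names (`<search>`, `<result>`, `<answer>`) and the `model`/`retriever` interfaces are illustrative assumptions, not the paper's exact prompt format.

```python
import re

def rollout(model, retriever, question, max_turns=4):
    """Interleave free-text reasoning with search calls until the model
    emits a final answer.

    Assumptions (hypothetical, for illustration only):
    - `model(context)` returns the next generated text segment.
    - `retriever(query)` returns retrieved passages as a string.
    - The model marks queries/answers with <search>/<answer> tags.
    """
    context = question
    for _ in range(max_turns):
        step = model(context)  # generate until a search query or an answer appears
        context += step
        # Stop as soon as the model commits to a final answer.
        answer = re.search(r"<answer>(.*?)</answer>", step, re.S)
        if answer:
            return answer.group(1).strip()
        # Otherwise, execute the requested search and feed results back
        # so they influence the next round of reasoning.
        query = re.search(r"<search>(.*?)</search>", step, re.S)
        if query:
            docs = retriever(query.group(1).strip())
            context += f"<result>{docs}</result>"
    return None  # no answer within the turn budget
```

With scripted stubs for the model and retriever, `rollout` returns the answer extracted from the second generation step, after one search round-trip. During RL training, the reward on such rollouts is what teaches the policy when and how to search, with no supervised reasoning traces.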

Mingyang Chen, Linzhuang Sun, Tianpeng Li, Haoze Sun, Yijie Zhou, Chenzheng Zhu, Haofen Wang, Jeff Z. Pan, Wen Zhang, Huajun Chen, Fan Yang, Zenan Zhou, Weipeng Chen • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | MATH | Accuracy | 49 | 535 |
| Multi-hop Question Answering | 2WikiMultihopQA | EM | 27.2 | 387 |
| Multi-hop Question Answering | HotpotQA | -- | -- | 294 |
| Multi-hop Question Answering | HotpotQA (test) | F1 | 75.3 | 255 |
| Mathematical Reasoning | AMC 23 | Accuracy | 7.5 | 198 |
| Multi-hop Question Answering | 2WikiMultiHopQA (test) | EM | 27.2 | 195 |
| Question Answering | PopQA | -- | -- | 186 |
| Multi-hop Question Answering | MuSiQue | EM | 17.8 | 185 |
| Question Answering | TriviaQA | EM | 59.4 | 182 |
| Mathematical Reasoning | AIME24 | Accuracy | 0.0 | 160 |

Showing 10 of 82 rows.
