ReSearch: Learning to Reason with Search for LLMs via Reinforcement Learning

About

Large Language Models (LLMs) have shown remarkable capabilities in reasoning, exemplified by the success of OpenAI-o1 and DeepSeek-R1. However, integrating reasoning with external search processes remains challenging, especially for complex multi-hop questions requiring multiple retrieval steps. We propose ReSearch, a novel framework that trains LLMs to Reason with Search via reinforcement learning without using any supervised data on reasoning steps. Our approach treats search operations as integral components of the reasoning chain, where when and how to perform searches is guided by text-based thinking, and search results subsequently influence further reasoning. We train ReSearch on Qwen2.5-7B(-Instruct) and Qwen2.5-32B(-Instruct) models and conduct extensive experiments. Despite being trained on only one dataset, our models demonstrate strong generalizability across various benchmarks. Analysis reveals that ReSearch naturally elicits advanced reasoning capabilities such as reflection and self-correction during the reinforcement learning process.
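The abstract describes an interleaved loop in which the model reasons in text, decides when to issue a search, and folds the retrieved results back into its reasoning chain. A minimal sketch of such a rollout is below; the tag names (`<search>`, `<result>`, `<answer>`) and the `generate`/`retrieve` interfaces are illustrative assumptions, not the paper's exact format.

```python
def rollout(generate, retrieve, question, max_searches=4):
    """Interleaved reason-and-search rollout (illustrative sketch).

    `generate(trace)` continues the text until it closes either a
    <search> or an <answer> tag; `retrieve(query)` returns passages.
    """
    trace = f"Question: {question}\n"
    for _ in range(max_searches + 1):
        step = generate(trace)
        trace += step
        if step.endswith("</search>"):
            # Extract the latest query and inject retrieved passages,
            # which then condition the next round of reasoning.
            start = step.rfind("<search>") + len("<search>")
            query = step[start:-len("</search>")].strip()
            trace += f"\n<result>{retrieve(query)}</result>\n"
        else:
            break  # the model produced a final answer
    return trace
```

Under reinforcement learning, completed traces like this would be scored (e.g. by answer correctness) and the reward used to update the policy, with no supervised reasoning-step labels.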

Mingyang Chen, Linzhuang Sun, Tianpeng Li, Haoze Sun, Yijie Zhou, Chenzheng Zhu, Haofen Wang, Jeff Z. Pan, Wen Zhang, Huajun Chen, Fan Yang, Zenan Zhou, Weipeng Chen • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Mathematical Reasoning | MATH | Accuracy | 49 | 535 |
| Multi-hop Question Answering | 2WikiMultihopQA | EM | 27.2 | 278 |
| Multi-hop Question Answering | HotpotQA (test) | F1 | 75.3 | 198 |
| Mathematical Reasoning | AMC 23 | Accuracy | 7.5 | 198 |
| Multi-hop Question Answering | 2WikiMultiHopQA (test) | -- | -- | 143 |
| Mathematical Reasoning | AIME24 | Accuracy | 0.0 | 130 |
| Multi-hop Question Answering | MuSiQue (test) | F1 | 46 | 111 |
| Multi-hop Question Answering | MuSiQue | EM | 7.4 | 106 |
| Mathematical Reasoning | GSM8K | -- | -- | 102 |
| Multi-hop Question Answering | Bamboogle | Exact Match | 12.8 | 97 |

Showing 10 of 44 rows.
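The EM (Exact Match) and F1 columns above are the standard token-level QA metrics. A sketch of how they are typically computed, assuming SQuAD-style answer normalization (the benchmarks' exact evaluation scripts may differ in details):

```python
import re
import string
from collections import Counter

def normalize(s):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    s = "".join(ch for ch in s.lower() if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(pred, gold):
    """1.0 iff the normalized prediction equals the normalized gold answer."""
    return float(normalize(pred) == normalize(gold))

def f1(pred, gold):
    """Harmonic mean of token-level precision and recall."""
    p, g = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```

EM is the stricter metric, which is why it tends to sit below F1 on the same dataset (e.g. MuSiQue above).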
