Rethinking the Design of Reinforcement Learning-Based Deep Research Agents
About
Large language models (LLMs) augmented with external tools are increasingly deployed as deep research agents that gather, reason over, and synthesize web information to answer complex queries. Although recent open-source systems achieve strong empirical performance via reinforcement learning from web interactions, the impact of key design choices remains under-explored. We formalize deep research as reinforcement learning in an episodic finite Markov decision process and construct a competitive baseline agent grounded in this formulation. Building on this foundation, we systematically examine critical design decisions at both training and inference time and identify four factors that substantially improve performance: replacing rule-based rewards with AI feedback from an LLM judge, fine-tuning with the on-policy RLOO algorithm instead of the off-policy GRPO algorithm, filtering low-quality training samples, and employing an error-tolerant test-time rollout strategy. Together, these design choices yield a deep research agent that establishes state-of-the-art performance among 7B-scale agents when evaluated across ten widely used benchmarks.
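The abstract's finding that on-policy RLOO outperforms GRPO comes down to how each algorithm turns a group of sampled rewards into advantages. The sketch below illustrates that contrast only; the function names are ours and the paper's actual training code is not shown. RLOO uses a leave-one-out baseline (each sample's reward minus the mean of the other samples), while GRPO normalizes by the group mean and standard deviation.

```python
import statistics

def rloo_advantages(rewards: list[float]) -> list[float]:
    """Leave-one-out baseline (RLOO): each sample's advantage is its
    reward minus the mean reward of the *other* samples in the group."""
    k = len(rewards)
    total = sum(rewards)
    return [r - (total - r) / (k - 1) for r in rewards]

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-normalized advantage (GRPO-style): subtract the group mean
    and divide by the group standard deviation (guarding against zero)."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0
    return [(r - mean) / std for r in rewards]

# Example: four rollouts of the same query, judged 1 (correct) or 0.
print(rloo_advantages([1.0, 0.0, 0.0, 1.0]))
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))
```

Note that RLOO advantages always sum to zero within a group without rescaling by the standard deviation, which is one commonly cited reason it pairs well with strictly on-policy updates.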
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | 2Wiki | -- | -- | 75 |
| Question Answering | NQ | Performance | 97.8 | 20 |
| Deep Research | 2Wiki (test) | Mean Correct Rate | 0.92 | 8 |
| Deep Research | Natural Questions (NQ) (test) | Accuracy | 97.8 | 8 |
| Deep Research | BAM (test) | Mean Correct Rate | 92.8 | 8 |
| Deep Research | MuSiQue (MUS) (test) | Mean Correct Answer Rate | 81 | 8 |
| Deep Research | Humanity's Last Exam (HLE) (test) | Mean Correct Answer Rate | 17.6 | 8 |
| Deep Research | GAIA (test) | Mean Correct Rate | 49.2 | 8 |
| Deep Research | BC (test) | Mean Correct Answer Rate | 620 | 8 |
| Question Answering | BAM | Performance Score | 92.8 | 8 |