IntentRL: Training Proactive User-intent Agents for Open-ended Deep Research via Reinforcement Learning
About
Deep Research (DR) agents extend Large Language Models (LLMs) beyond parametric knowledge by autonomously retrieving and synthesizing evidence from large web corpora into long-form reports, enabling a long-horizon agentic paradigm. However, unlike real-time conversational assistants, DR is computationally expensive and time-consuming, creating an autonomy-interaction dilemma: high autonomy on ambiguous user queries often leads to prolonged execution with unsatisfactory outcomes. To address this, we propose IntentRL, a framework that trains proactive agents to clarify latent user intents before starting long-horizon research. To overcome the scarcity of open-ended research data, we introduce a scalable pipeline that expands a few seed samples into high-quality dialogue turns via a shallow-to-deep intent refinement graph. We further adopt a two-stage reinforcement learning (RL) strategy: Stage I applies RL on offline dialogues to efficiently learn general user-interaction behavior, while Stage II uses the trained agent and a user simulator for online rollouts to strengthen adaptation to diverse user feedback. Extensive experiments show that IntentRL significantly improves both intent hit rate and downstream task performance, outperforming the built-in clarify modules of closed-source DR agents and proactive LLM baselines.
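The two-stage strategy above can be sketched as reward-and-rollout scaffolding. The snippet below is a minimal illustrative sketch, not the paper's implementation: the policy, user simulator, and keyword-overlap reward are all toy stand-ins, and the actual RL parameter update is omitted. Stage I scores the agent's clarifying questions against fixed offline dialogues; Stage II rolls the agent out against a user simulator and scores each turn.

```python
def reward(clarify_question: str, latent_intent: str) -> float:
    # Toy reward: fraction of latent-intent keywords that the
    # clarifying question manages to touch on.
    keywords = latent_intent.lower().split()
    hits = sum(1 for k in keywords if k in clarify_question.lower())
    return hits / max(len(keywords), 1)

def policy(query: str, temperature: float = 0.0) -> str:
    # Stand-in for the LLM agent: emits one clarifying question.
    return f"Could you specify the scope and goal of '{query}'?"

def simulate_user(latent_intent: str, question: str) -> str:
    # Toy user simulator: answers any clarification by revealing
    # the latent intent verbatim.
    return latent_intent

def stage1_offline(dialogues):
    """Stage I: score the policy on fixed offline (query, intent) pairs."""
    return [reward(policy(q), intent) for q, intent in dialogues]

def stage2_online(pairs, rollouts: int = 2):
    """Stage II: online rollouts against the user simulator."""
    scores = []
    for q, intent in pairs:
        for _ in range(rollouts):
            question = policy(q, temperature=0.7)
            feedback = simulate_user(intent, question)
            scores.append(reward(question, feedback))
    return scores
```

In a real training run, the per-turn scores would feed a policy-gradient update of the agent; here they only show where the offline data (Stage I) and the simulator loop (Stage II) enter the pipeline.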
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Deep Research Report Generation | DeepResearch Bench | Comprehensiveness | 43.1 | 54 |
| Deep Research Report Generation | PDR-Bench | P-Score | 7.21 | 22 |
| Deep Research Report Generation | Rigorous Bench | Quality | 0.6247 | 22 |
| Clarification Generation | DeepResearch Bench (online interactive setting) | Intent Precision | 36.44 | 6 |
| Clarification Generation | DeepResearch Bench (offline test) | Quality Score | 2.43 | 4 |