Information Gain-based Policy Optimization: A Simple and Effective Approach for Multi-Turn Search Agents
About
Large language model (LLM)-based agents are increasingly trained with reinforcement learning (RL) to enhance their ability to interact with external environments through tool use, particularly in search-based settings that require multi-turn reasoning and knowledge acquisition. However, existing approaches typically rely on outcome-based rewards that are provided only upon generating the final answer. This reward sparsity becomes particularly problematic in multi-turn settings, where long trajectories exacerbate three critical issues: (i) advantage collapse, where all rollouts receive identical rewards and provide no useful learning signals; (ii) lack of fine-grained credit assignment, where the correctness of intermediate turns is obscured, especially in long-horizon tasks; and (iii) poor sample efficiency, where each rollout yields only a single outcome signal, leading to low data utilization. In this paper, we propose Information Gain-based Policy Optimization (IGPO), a simple yet effective RL framework that provides dense and intrinsic supervision for multi-turn agent training. IGPO models each interaction turn as an incremental process of acquiring information about the ground truth, and defines turn-level rewards as the marginal increase in the policy's probability of producing the correct answer. Unlike prior process-level reward approaches that depend on external reward models or costly Monte Carlo estimation, IGPO derives intrinsic rewards directly from the model's own belief updates. These intrinsic turn-level rewards are combined with outcome-level supervision to form dense reward signals. Extensive experiments on both in-domain and out-of-domain benchmarks demonstrate that IGPO consistently outperforms strong baselines in multi-turn scenarios, achieving higher accuracy and improved data efficiency. Our code is available at https://github.com/GuoqingWang1/IGPO.
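The reward scheme described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name and the convention that `answer_probs[t]` denotes the policy's probability of producing the ground-truth answer after turn `t` (with `answer_probs[0]` the probability before any tool call) are assumptions for this example.

```python
# Illustrative sketch of IGPO's dense reward construction (hypothetical
# names; not from the authors' codebase).
def information_gain_rewards(answer_probs, outcome_reward):
    """Turn-level rewards as marginal belief gains, plus outcome reward.

    answer_probs: probabilities of the ground-truth answer after each turn,
        with answer_probs[0] measured before the first tool call.
    outcome_reward: sparse final reward (e.g., 1.0 if the answer is correct).
    """
    # Intrinsic turn-level reward: the marginal increase in the policy's
    # probability of producing the correct answer after this turn.
    turn_rewards = [
        answer_probs[t] - answer_probs[t - 1]
        for t in range(1, len(answer_probs))
    ]
    # Combine with outcome-level supervision at the final turn to form
    # the dense reward signal.
    turn_rewards[-1] += outcome_reward
    return turn_rewards

# Example: belief in the correct answer rises from 0.1 to 0.7 over two
# search turns, and the final answer is correct (outcome reward = 1.0).
rewards = information_gain_rewards([0.1, 0.4, 0.7], outcome_reward=1.0)
```

In practice the belief probabilities come from the policy itself (the likelihood it assigns to the ground-truth answer string), so no external reward model or Monte Carlo rollout is needed.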
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Question Answering | 2Wiki | F1: 72.1 | 152 |
| Question Answering | Bamboogle | -- | 120 |
| Single-hop Question Answering | PopQA | -- | 104 |
| Question Answering | 2WikiMultiHopQA (test) | F1: 72.1 | 81 |
| Single-hop Question Answering | TriviaQA | -- | 81 |
| Multi-hop QA | HotpotQA | -- | 76 |
| Question Answering | Natural Questions (NQ) (test) | -- | 68 |
| Multi-hop QA | MuSiQue | EM: 31.4 | 65 |
| Multi-hop Question Answering | Multi-Hop QA (HotpotQA, 2Wiki, Musique, Bamboogle) | HotpotQA score: 57.2 | 48 |
| Question Answering | NQ | F1: 46.4 | 31 |