Diversity-Incentivized Exploration for Versatile Reasoning
About
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a crucial paradigm for incentivizing reasoning capabilities in Large Language Models (LLMs). Due to vast state-action spaces and reward sparsity in reasoning tasks, existing methods often struggle with deficient exploration and poor sample efficiency. In this paper, we propose **DIVER** (**D**iversity-**I**ncentivized Exploration for **V**ersatil**E** **R**easoning), an innovative framework that highlights the pivotal role of global sequence-level diversity in incentivizing deep exploration for versatile reasoning. We first conduct an empirical study that reveals a strong positive correlation between global diversity and reasoning capacity. Building on this insight, we introduce global diversity incentives as an intrinsic reward to promote deep exploration in a semantically structured space. Incorporating this intrinsic reward, we develop a potential-based reward shaping mechanism to preserve optimal policy invariance, and we design simple heuristics to mitigate possible reward hacking. Experimental results show that DIVER outperforms competitive RLVR baselines with various exploration strategies on both in-domain and out-of-domain tasks, excelling in both Pass@1 and Pass@k evaluations. Our code is available at https://github.com/NJU-RL/DIVER.
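The potential-based shaping mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `diversity_potential` is a hypothetical stand-in (a unique-token ratio) for the paper's semantic global-diversity measure, and all function names are illustrative. The key property is that a shaping term of the form `gamma * Phi(s') - Phi(s)` leaves the optimal policy unchanged for any choice of potential `Phi`.

```python
def shaped_reward(task_reward: float, phi_prev: float, phi_next: float,
                  gamma: float = 1.0) -> float:
    """Potential-based reward shaping: r' = r + gamma * Phi(s') - Phi(s).

    Adding this term preserves optimal policy invariance regardless of
    the potential function Phi (Ng et al., 1999).
    """
    return task_reward + gamma * phi_next - phi_prev


def diversity_potential(tokens: list[str]) -> float:
    """Hypothetical stand-in potential: fraction of unique tokens.

    The paper uses a semantic, global sequence-level diversity signal;
    this simple ratio only illustrates where such a signal plugs in.
    """
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)


# Toy rollout step: the sparse verifiable reward is 0 at intermediate
# steps, so the shaping term supplies a dense intrinsic signal.
prefix = ["let", "x", "=", "2"]
extended = prefix + ["then", "x", "squared", "is", "4"]
r = shaped_reward(
    task_reward=0.0,
    phi_prev=diversity_potential(prefix),
    phi_next=diversity_potential(extended),
)
```

Because the shaping term telescopes over a trajectory, the total shaped return differs from the true return only by the (fixed) potential of the start state, which is why the optimal policy is unaffected.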
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | AIME 2024 | Accuracy | 51.1 | 251 |
| Science Reasoning | GPQA | Accuracy | 59.1 | 218 |
| Commonsense Reasoning | ARC-C | Accuracy | 91.1 | 51 |
| Mathematical Reasoning | AIME 2025 | Accuracy | 36.9 | 37 |
| Mathematical Reasoning | In-Distribution Reasoning Performance Suite (AIME, AMC, MATH-500, Minerva, Olympiad) | AIME 2024 Score | 23.8 | 30 |
| Mathematical Reasoning | Minerva Math | Accuracy | 36.8 | 28 |
| Mathematical Reasoning | Olympiad | Accuracy (%) | 61.2 | 21 |
| Multi-task Language Understanding | MMLU-Pro | Accuracy | 56.6 | 14 |
| Mathematical Reasoning | Out-of-Domain Reasoning Suite (ARC-C, GPQA, MMLU-Pro) | ARC-C Score | 84.1 | 10 |
| Mathematical Reasoning | AMC | Accuracy | 82.0 | 8 |