
SetPO: Set-Level Policy Optimization for Diversity-Preserving LLM Reasoning

About

Reinforcement learning with verifiable rewards has proven notably effective at improving the reasoning performance of large language models (LLMs), especially on mathematics tasks. However, these gains often come at the cost of outcome diversity: the model concentrates probability mass on a narrow set of solutions. Motivated by diminishing-returns principles, we introduce a set-level diversity objective defined over sampled trajectories using kernelized similarity. Our approach derives a leave-one-out marginal contribution for each sampled trajectory and integrates this objective as a plug-in advantage-shaping term for policy optimization. We further analyze the contribution of a single trajectory to language-model diversity within a distribution-perturbation framework. This analysis establishes a monotonicity property, proving that rarer trajectories make consistently larger marginal contributions to global diversity. Extensive experiments across a range of model scales demonstrate the effectiveness of the proposed algorithm, which consistently outperforms strong baselines in both Pass@1 and Pass@K across various benchmarks.
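The abstract does not specify the kernel or the exact diversity objective, so the following is only a minimal sketch of the general idea: score a sampled set of trajectory embeddings with a submodular (diminishing-returns) diversity measure, here assumed to be the log-determinant of an RBF kernel matrix, compute each trajectory's leave-one-out marginal contribution, and add it as a bonus to the task advantages. The function names, the RBF/log-det choices, and the `beta` weight are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise RBF similarity between trajectory embeddings (rows of X).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def set_diversity(K, eps=1e-6):
    # Diminishing-returns (submodular) diversity score of a trajectory set:
    # log-determinant of the regularized kernel matrix (a common choice;
    # the paper's actual objective may differ).
    n = K.shape[0]
    _, logdet = np.linalg.slogdet(K + eps * np.eye(n))
    return logdet

def loo_marginal_contributions(X, gamma=1.0):
    # Leave-one-out marginal contribution of each trajectory:
    # D(S) - D(S \ {i}). Near-duplicate trajectories add little,
    # so rarer trajectories receive larger contributions.
    K = rbf_kernel(X, gamma)
    full = set_diversity(K)
    n = X.shape[0]
    contrib = np.empty(n)
    for i in range(n):
        idx = np.delete(np.arange(n), i)
        contrib[i] = full - set_diversity(K[np.ix_(idx, idx)])
    return contrib

def shaped_advantages(task_adv, X, beta=0.1, gamma=1.0):
    # Plug-in advantage shaping: task advantage plus a weighted diversity bonus.
    return task_adv + beta * loo_marginal_contributions(X, gamma)
```

Under this sketch, an embedding far from the rest of the set gets a larger (less negative) marginal contribution than a near-duplicate, matching the monotonicity property the abstract describes.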

Chenyi Li, Yuan Zhang, Bo Wang, Guoqing Ma, Wei Tang, Haoyang Huang, Nan Duan • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | CollegeMATH | – | – | 161 |
| Mathematical Reasoning | MATH 500 | Pass@1 | 72.9 | 153 |
| Mathematical Reasoning | GSM8K | Pass@1 | 93 | 102 |
| Mathematical Reasoning | AIME 25 | Pass@1 | 9.7 | 65 |
| Mathematical Reasoning | AIME 24 | Pass@1 | 25.3 | 59 |
| Mathematical Reasoning | AMC 23 | Pass@1 | 53.2 | 46 |
| Mathematical Reasoning | AMC23 | Pass@1 | 62.3 | 43 |
| Mathematical Reasoning | MATH500 | Pass@1 | 80.8 | 41 |
| Mathematical Reasoning | AIME 24 | Pass@1 | 13.4 | 39 |
| Mathematical Reasoning | AIME25 | Pass@1 | 13.6 | 11 |
