
SimKO: Simple Pass@K Policy Optimization

About

Reinforcement learning with verifiable rewards (RLVR) has advanced the reasoning capabilities of large language models (LLMs). However, prevailing RLVR methods exhibit a systematic bias toward exploitation over exploration, as evidenced by improved pass@1 but reduced pass@K (K>1) performance. To understand this issue, we analyze training dynamics of RLVR methods by tracking the token-level probability distributions over vocabulary candidates. Our analysis reveals a consistent probability concentration effect where the top-1 candidate increasingly accumulates probability mass and suppresses that of other candidates. More importantly, stronger over-concentration correlates with worse pass@K performance. Inspired by this finding, we propose Simple Pass@K Optimization (SimKO), a method designed to mitigate the over-concentration issue, thereby encouraging exploration. SimKO operates in an asymmetrical manner. For verified-correct responses, it boosts the probabilities of the top-K candidates. For verified-incorrect responses, it applies stronger penalties to the top-1 candidate. We observe that this asymmetric design is particularly effective at mitigating over-concentration when applied at tokens with high entropy. Across various math and logical-reasoning benchmarks, SimKO consistently yields higher pass@K for a wide range of K, providing a simple way to improve RLVR's exploration.
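The asymmetric update described above can be sketched in a few lines. This is an illustrative reading of the abstract, not the authors' implementation: the function, the entropy threshold, the top-K boost split, and the penalty scale are all hypothetical names and values chosen for demonstration.

```python
import math

def entropy(probs):
    """Shannon entropy of a token's probability distribution over candidates."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def simko_token_weights(probs, correct, k=2, entropy_threshold=1.0, penalty_scale=2.0):
    """Illustrative SimKO-style asymmetric per-candidate update weights for one token.

    For a verified-correct response, the positive update is spread across the
    top-K candidates; for a verified-incorrect response, the penalty on the
    top-1 candidate is amplified. The asymmetric treatment is applied only at
    high-entropy tokens (threshold here is an assumption for illustration).
    """
    n = len(probs)
    order = sorted(range(n), key=lambda i: probs[i], reverse=True)
    weights = [0.0] * n
    if entropy(probs) < entropy_threshold:
        # Low-entropy token: fall back to a plain top-1 update.
        weights[order[0]] = 1.0 if correct else -1.0
        return weights
    if correct:
        # Boost the top-K candidates, not just the argmax,
        # to counteract probability concentration.
        for i in order[:k]:
            weights[i] = 1.0 / k
    else:
        # Penalize the top-1 candidate more strongly.
        weights[order[0]] = -penalty_scale
    return weights
```

For a high-entropy distribution such as `[0.4, 0.3, 0.2, 0.1]`, a correct response yields weights `[0.5, 0.5, 0.0, 0.0]` (top-2 boost), while an incorrect one yields `[-2.0, 0.0, 0.0, 0.0]` (amplified top-1 penalty); a peaked distribution like `[0.97, 0.01, 0.01, 0.01]` falls back to the plain top-1 update.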

Ruotian Peng, Yi Ren, Zhouliang Yu, Weiyang Liu, Yandong Wen • 2025

Related benchmarks

Task                           Dataset                      Metric            Result   Rank
Code Generation                HumanEval                    Pass@1            56.2     171
Code Generation                LiveCodeBench                Rate @32 Score    46.5     17
Mathematical Reasoning         OlympiadBench (test)         @1 Success Rate   31.8     15
Mathematical Problem Solving   AIME 2024 and 2025 (test)    Accuracy          23.33    12
Mathematical Reasoning         AIME25 (test)                Pass@1            27.5     8
Mathematical Reasoning         OmniMath (test)              Top-1 Accuracy    0.435    8
