
Approximation of Log-Partition Function in Policy Mirror Descent Induces Implicit Regularization for LLM Post-Training

About

Policy mirror descent (PMD) provides a principled framework for reinforcement learning (RL) by iteratively solving KL-regularized policy improvement subproblems. While this approach has been adopted in training advanced LLMs such as Kimi K1.5/K2, the ideal closed-form PMD updates require reliable partition function estimation, a significant challenge when working with limited rollouts in the vast action spaces of LLMs. We investigate a practical algorithm, termed PMD-mean, that approximates the log-partition term with the mean reward under the sampling policy and performs regression in log-policy space. Specifically, we characterize the population solution of PMD-mean and demonstrate that it implicitly optimizes mirror descent subproblems with an adaptive mixed KL--$\chi^2$ regularizer. This additional $\chi^2$ regularization constrains large probability changes, producing more conservative updates when expected rewards are low and enhancing robustness against finite-sample estimation errors. Experiments on math reasoning tasks show that PMD-mean achieves superior performance with improved stability and time efficiency. These findings deepen our understanding of PMD-mean and illuminate pathways toward principled improvements in RL algorithms for LLMs. Code is available at https://github.com/horizon-rl/OpenKimi.
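The abstract describes PMD-mean as replacing the intractable log-partition term with the mean reward under the sampling policy and regressing in log-policy space. The exact objective is in the paper; as a rough illustration only, the following sketch (with hypothetical function and variable names, assuming the standard PMD target $\log\pi_{k+1} = \log\pi_k + (r - \bar r)/\eta$ after the mean-reward substitution) shows a squared-error regression loss of that form:

```python
import numpy as np

def pmd_mean_loss(logp_new, logp_old, rewards, eta=1.0):
    """Illustrative sketch of a PMD-mean-style regression loss.

    The closed-form PMD update needs the log-partition
    log E_{pi_k}[exp(r/eta)]; here it is approximated by the
    mean reward over the sampled rollouts (the baseline), so the
    log-policy target shifts by the mean-centered reward / eta.
    """
    baseline = rewards.mean()                      # mean-reward approximation
    target = logp_old + (rewards - baseline) / eta  # approximate PMD target
    return np.mean((logp_new - target) ** 2)        # regression in log-policy space

# toy check: the loss vanishes when the new log-policy hits the target
rewards = np.array([1.0, 2.0, 3.0])
logp_old = np.zeros(3)
target = logp_old + (rewards - rewards.mean())     # eta = 1
print(pmd_mean_loss(target, logp_old, rewards))    # 0.0
```

Because the mean reward lower-bounds the true log-sum-exp partition term (by Jensen's inequality), this substitution is what the paper argues induces the additional adaptive $\chi^2$ regularization.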

Zhenghao Xu, Qin Lu, Changlong Yu, Tuo Zhao • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | AIME 2025 | Avg@32 | 37.19 | 27 |
| Mathematical Reasoning | AIME 2024 | Avg@32 | 50.83 | 18 |
| Mathematical Reasoning | AIME 2024, 2025 | Average Score | 44.01 | 13 |
| Mathematical Reasoning | AIME Average | Avg@32 | 44.01 | 8 |
