
UCPO: Uncertainty-Aware Policy Optimization

About

The key to building trustworthy Large Language Models (LLMs) lies in endowing them with the inherent ability to express uncertainty, mitigating the hallucinations that limit their use in high-stakes applications. However, existing RL paradigms such as GRPO often suffer from Advantage Bias due to binary decision spaces and static uncertainty rewards, inducing either excessive conservatism or overconfidence. To tackle this challenge, this paper unveils the root causes of reward hacking and overconfidence in current RL paradigms that incorporate uncertainty-based rewards, based on which we propose the UnCertainty-Aware Policy Optimization (UCPO) framework. UCPO employs Ternary Advantage Decoupling to separate and independently normalize deterministic and uncertain rollouts, thereby eliminating advantage bias. Furthermore, a Dynamic Uncertainty Reward Adjustment mechanism is introduced to calibrate uncertainty weights in real time according to model evolution and instance difficulty. Experimental results on mathematical reasoning and general tasks demonstrate that UCPO effectively resolves the reward imbalance, significantly improving the reliability and calibration of models beyond their knowledge boundaries.
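To make the two mechanisms concrete, below is a minimal sketch of how a GRPO-style advantage computation could be decoupled across deterministic and uncertain rollouts, paired with a difficulty-dependent abstention reward. Everything here is illustrative: the function names (`ternary_advantages`, `dynamic_uncertainty_reward`), the z-score group normalization, and the linear reward schedule are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def ternary_advantages(rewards, is_uncertain, eps=1e-8):
    """GRPO-style advantages with decoupled groups (illustrative sketch).

    Instead of normalizing all rollouts of a prompt in one group -- which
    lets abstaining ("I don't know") rollouts bias the advantages of
    deterministic answers -- split rollouts into deterministic and
    uncertain subsets and z-normalize each subset independently.
    """
    rewards = np.asarray(rewards, dtype=float)
    is_uncertain = np.asarray(is_uncertain, dtype=bool)
    adv = np.zeros_like(rewards)
    for mask in (is_uncertain, ~is_uncertain):
        if mask.sum() > 1:  # normalization needs at least two samples
            group = rewards[mask]
            adv[mask] = (group - group.mean()) / (group.std() + eps)
    return adv

def dynamic_uncertainty_reward(base_reward, pass_rate, lo=0.1, hi=1.0):
    """Difficulty-dependent abstention reward (assumed linear schedule).

    Instance difficulty is proxied here by the group's empirical pass
    rate: abstention earns more on hard prompts (low pass rate) and less
    as the model masters the prompt over training.
    """
    return base_reward * (lo + (hi - lo) * (1.0 - pass_rate))

# One prompt, 8 rollouts: 3 correct, 3 wrong, 2 abstentions.
correct = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
pass_rate = float(np.mean(correct))                 # 0.5 -> moderately hard
r_unc = dynamic_uncertainty_reward(0.5, pass_rate)  # scaled abstention reward
rewards = correct + [r_unc, r_unc]
is_uncertain = [False] * 6 + [True, True]
print(ternary_advantages(rewards, is_uncertain))
```

The point of the decoupling is visible in the example: under a single shared normalization, the abstaining rollouts would shift the group mean and thereby bias the advantages assigned to the answered rollouts; normalizing the two subsets independently removes that coupling, which is the advantage bias the abstract describes.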

Xianzhou Zeng, Jing Huang, Chunmei Xie, Gongrui Nan, Siye Chen, Mengyu Lu, Weiqi Xiong, Qixuan Zhou, Junhao Zhang, Qiang Zhu, Yadong Li, Xingzhong Xu • 2026

Related benchmarks

Task                    | Dataset        | Metric    | Result | Rank
General Tasks           | GPQA Diamond   | PAQ Score | 0.677  | 14
General Tasks           | MMLU-Redux2    | PAQ       | 91.67  | 14
Math and Text Reasoning | AIME 24        | PAQ       | 86.11  | 14
Math and Text Reasoning | MATH 500       | PAQ       | 97.28  | 14
Math and Text Reasoning | Olympiad Bench | PAQ       | 73.67  | 14
Math and Text Reasoning | AMC            | PAQ       | 91.95  | 14
Math and Text Reasoning | Minerva        | PAQ       | 0.4915 | 14
