
Right Question is Already Half the Answer: Fully Unsupervised LLM Reasoning Incentivization

About

Existing methods for enhancing the reasoning capability of large language models predominantly rely on supervised fine-tuning (SFT) followed by reinforcement learning (RL) on reasoning-specific data. These approaches critically depend on external supervision, such as labeled reasoning traces, verified golden answers, or pre-trained reward models. In this work, we propose Entropy Minimized Policy Optimization (EMPO), an early attempt at fully unsupervised LLM reasoning incentivization. By continuously minimizing the predictive entropy of LLMs on unlabeled questions in a latent semantic space, EMPO achieves performance competitive with supervised counterparts on both mathematical and free-form natural reasoning tasks. Specifically, without any supervised signals, EMPO boosts the accuracy of Qwen2.5-Math-7B Base from 30.7% to 48.1% on mathematical benchmarks and improves the accuracy of Qwen2.5-7B Base from 32.1% to 50.1% on MMLU-Pro. Preliminary experiments and analysis are also provided to interpret the effectiveness of EMPO. Code is available at https://github.com/QingyangZhang/EMPO.
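
To make the core idea concrete, here is a minimal sketch of how a semantic-entropy-based reward could be computed from a group of sampled answers: answers are grouped into semantic clusters (for math, e.g., by the extracted final answer), and each sample is rewarded with the empirical probability of its cluster. The function name, the `extract_fn` hook, and the GRPO-style usage at the end are illustrative assumptions, not code taken from the EMPO repository.

```python
import math
from collections import Counter

def semantic_cluster_rewards(answers, extract_fn):
    """Hypothetical sketch: reward each sampled answer with the empirical
    probability of its semantic cluster.

    answers    : list of G completions sampled for one unlabeled question.
    extract_fn : maps a completion to a canonical semantic key
                 (e.g., the extracted final answer for a math problem).
    """
    keys = [extract_fn(a) for a in answers]
    counts = Counter(keys)          # cluster sizes over the G samples
    G = len(answers)

    # Reward of answer i = empirical probability of the cluster it falls in;
    # answers that agree with the dominant meaning score higher.
    rewards = [counts[k] / G for k in keys]

    # Semantic entropy of the empirical answer distribution (for monitoring):
    # driving rewards up concentrates mass on one cluster and lowers this value.
    entropy = -sum((c / G) * math.log(c / G) for c in counts.values())
    return rewards, entropy

# Illustrative usage (policy/generation code is assumed, not shown):
#   samples = [policy.generate(question) for _ in range(16)]
#   rewards, H = semantic_cluster_rewards(samples, extract_final_answer)
#   ...feed `rewards` into a GRPO-style policy-gradient update.
```

In expectation, this reward equals the collision probability of the semantic clusters, so maximizing it pushes the policy toward a single dominant semantic answer per question, which is one way to operationalize minimizing predictive entropy without any labels.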

Qingyang Zhang, Haitao Wu, Changqing Zhang, Peilin Zhao, Yatao Bian • 2025

Related benchmarks

Task                        | Dataset           | Result               | Rank
Mathematical Reasoning      | GSM8K (test)      | Accuracy: 83.2       | 900
GUI Grounding               | ScreenSpot Pro    | Accuracy: 20.7       | 163
GUI Grounding               | ScreenSpot        | Avg Acc: 69.2        | 133
GUI Grounding               | OSWorld-G         | Average Score: 42.6  | 107
GUI Grounding               | ScreenSpot (test) | Element Accuracy: 83 | 42
Mathematical Reasoning      | AIME 2024         | Accuracy@16: 15.8    | 36
Fine-grained Classification | Pets (test)       | Accuracy: 70.4       | 29
Mathematical Reasoning      | AIME 2025         | Avg@16: 12.3         | 28
Scientific Reasoning        | GPQA              | Avg@16: 36           | 28
Mathematical Reasoning      | AMC 2023          | Avg@16 Score: 60.2   | 28

(Showing 10 of 18 rows.)
