Right Question is Already Half the Answer: Fully Unsupervised LLM Reasoning Incentivization
About
Existing methods to enhance the reasoning capability of large language models predominantly rely on supervised fine-tuning (SFT) followed by reinforcement learning (RL) on reasoning-specific data. These approaches critically depend on external supervision, such as labeled reasoning traces, verified golden answers, or pre-trained reward models. In this work, we propose Entropy Minimized Policy Optimization (EMPO), which makes an early attempt at fully unsupervised LLM reasoning incentivization. By continuously minimizing the predictive entropy of LLMs on unlabeled questions in a latent semantic space, EMPO achieves competitive performance compared to supervised counterparts on both mathematical and free-form natural reasoning tasks. Specifically, without any supervised signals, EMPO boosts the accuracy of Qwen2.5-Math-7B Base from 30.7% to 48.1% on mathematical benchmarks and improves the accuracy of Qwen2.5-7B Base from 32.1% to 50.1% on MMLU-Pro. Preliminary experiments and analyses are also provided to interpret the effectiveness of EMPO. Code is available at https://github.com/QingyangZhang/EMPO.
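The core idea above can be illustrated with a small sketch: sample several answers to the same unlabeled question, group them into semantic clusters, and reward each answer by the empirical probability of its cluster, so that reinforcing high-probability clusters drives down the model's semantic entropy. Everything below is a simplified illustration, not the repository's implementation: `semantic_cluster` is a hypothetical stand-in (the paper clusters answers by semantic equivalence, e.g. via answer verification for math), and `empo_rewards` is an assumed helper name.

```python
from collections import Counter
import math

def semantic_cluster(answer: str) -> str:
    # Hypothetical equivalence function: stands in for clustering answers
    # in a latent semantic space. Here we just normalize the surface form.
    return answer.strip().lower()

def empo_rewards(sampled_answers):
    """For a batch of answers sampled for one question, reward each answer
    by the empirical probability of its semantic cluster. Reinforcing
    high-probability clusters is equivalent to minimizing the predictive
    entropy over clusters (returned for inspection)."""
    clusters = [semantic_cluster(a) for a in sampled_answers]
    counts = Counter(clusters)
    n = len(sampled_answers)
    probs = {c: k / n for c, k in counts.items()}
    entropy = -sum(p * math.log(p) for p in probs.values())
    rewards = [probs[c] for c in clusters]
    return rewards, entropy
```

For example, if 3 of 4 sampled answers fall into one cluster, those answers receive reward 0.75 and the outlier 0.25; no labels or external reward model are consulted at any point.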
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K (test) | Accuracy | 83.2 | 900 |
| GUI Grounding | ScreenSpot Pro | Accuracy | 20.7 | 163 |
| GUI Grounding | ScreenSpot | Avg Acc | 69.2 | 133 |
| GUI Grounding | OSWorld-G | Average Score | 42.6 | 107 |
| GUI Grounding | ScreenSpot (test) | Element Accuracy | 83 | 42 |
| Mathematical Reasoning | AIME 2024 | Accuracy@16 | 15.8 | 36 |
| Fine-grained Classification | Pets (test) | Accuracy | 70.4 | 29 |
| Mathematical Reasoning | AIME 2025 | Avg@16 | 12.3 | 28 |
| Scientific Reasoning | GPQA | Avg@16 | 36 | 28 |
| Mathematical Reasoning | AMC 2023 | Avg@16 Score | 60.2 | 28 |