
Restoring Exploration after Post-Training: Latent Exploration Decoding for Large Reasoning Models

About

Large Reasoning Models (LRMs) have recently achieved strong mathematical and code reasoning performance through Reinforcement Learning (RL) post-training. However, we show that modern reasoning post-training induces an unintended exploration collapse: temperature-based sampling no longer increases pass@$n$ accuracy. Empirically, the final-layer posteriors of post-trained LRMs exhibit sharply reduced entropy, while the entropy of intermediate layers remains relatively high. Motivated by this entropy asymmetry, we propose Latent Exploration Decoding (LED), a depth-conditioned decoding strategy. LED aggregates intermediate posteriors via cumulative sum and selects the depth configuration with maximal entropy as an exploration candidate. Without additional training or parameters, LED consistently improves pass@1 and pass@16 accuracy, by 0.61 and 1.03 percentage points respectively, across multiple reasoning benchmarks and models. Project page: https://GitHub.com/Xiaomi-Research/LED.
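The decoding rule described above can be sketched in a few lines. This is a minimal illustration, not the released implementation: it assumes per-layer next-token logits are obtained by applying the LM head to each intermediate hidden state (logit-lens style), and the names `led_step`, `layer_logits` are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p, eps=1e-12):
    return -(p * np.log(p + eps)).sum(axis=-1)

def led_step(layer_logits, rng):
    """One LED-style decoding step (sketch).

    layer_logits: (L, V) array of next-token logits, one row per
    intermediate layer (an assumption; the paper's exact depth
    configurations may differ).
    """
    posteriors = softmax(layer_logits)                  # (L, V)
    # Aggregate intermediate posteriors via cumulative sum over depth,
    # renormalising so every depth prefix is a valid distribution.
    cum = np.cumsum(posteriors, axis=0)
    cum = cum / cum.sum(axis=-1, keepdims=True)
    # Select the depth configuration with maximal entropy as the
    # exploration candidate, then sample the next token from it.
    k = int(np.argmax(entropy(cum)))
    token = rng.choice(cum.shape[-1], p=cum[k])
    return token, k
```

For instance, if the final layer's posterior is sharply peaked but an earlier layer's is nearly uniform, the aggregated distribution that includes the earlier layer has higher entropy and is the one sampled from, restoring exploration without any extra training.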

Wenhui Tan, Fiorenzo Parascandolo, Enver Sangineto, Jianzhong Ju, Zhenbo Luo, Qian Cao, Rita Cucchiara, Ruihua Song, Jian Luan • 2026

Related benchmarks

Task                     Dataset         Metric    Result   Rank
Scientific Reasoning     GPQA Diamond    Pass@16   91.94    21
Coding                   LiveCodeBench   Pass@1    69.11    15
Mathematical Reasoning   AIME 2024       Pass@1    87.92    15
Mathematical Reasoning   AIME 2025       Pass@1    81.04    15
Mathematical Reasoning   MATH 500        Pass@1    98.3     15
Mathematical Reasoning   GSM8K           Pass@1    96.48    15
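The Pass@1 and Pass@16 figures above are instances of the pass@$k$ metric. Assuming the standard unbiased estimator is used (as in common code-evaluation practice; the project's exact evaluation script is not shown here), it can be computed as:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of
    k samples drawn without replacement from n generations, of which c
    are correct, is correct. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # samples must contain a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with `n = 16` generations of which one is correct, `pass_at_k(16, 1, 16)` is 1.0 while `pass_at_k(16, 1, 1)` is 1/16, which is why widening the sampling budget only helps when the model still explores diverse solutions.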
