# SeLaR: Selective Latent Reasoning in Large Language Models

## About
Chain-of-Thought (CoT) has become a cornerstone of reasoning in large language models, yet its effectiveness is constrained by the limited expressiveness of discrete token sampling. Recent latent reasoning approaches attempt to alleviate this limitation by replacing discrete tokens with soft embeddings (probability-weighted mixtures of token embeddings) or hidden states, but they commonly suffer from two issues: (1) global activation injects perturbations into high-confidence steps, impairing reasoning stability; and (2) soft embeddings quickly collapse toward the highest-probability token, limiting exploration of alternative trajectories. To address these challenges, we propose SeLaR (Selective Latent Reasoning), a lightweight and training-free framework. SeLaR introduces an entropy-gated mechanism that activates soft embeddings only at low-confidence steps, while preserving discrete decoding at high-confidence steps. Additionally, we propose an entropy-aware contrastive regularization that pushes soft embeddings away from the dominant (highest-probability) token's direction, encouraging sustained exploration of multiple latent reasoning paths. Experiments on five reasoning benchmarks demonstrate that SeLaR consistently outperforms standard CoT and state-of-the-art training-free methods.
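The entropy-gated mechanism can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: the threshold `tau`, the push strength `alpha`, and the function names are hypothetical, and the contrastive regularization is approximated here as a single subtraction along the dominant token's direction.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    return -np.sum(p * np.log(p + 1e-12))

def gated_next_input(logits, token_embeddings, tau=1.0, alpha=0.1):
    """Entropy-gated step (illustrative sketch).

    At high-confidence (low-entropy) steps, return the discrete
    argmax token's embedding; at low-confidence steps, return a
    probability-weighted soft embedding pushed away from the dominant
    token's direction. `tau` and `alpha` are hypothetical
    hyperparameters, not values from the paper.
    """
    p = softmax(logits)
    top = int(np.argmax(p))
    if entropy(p) <= tau:
        # High confidence: preserve standard discrete decoding.
        return token_embeddings[top]
    # Low confidence: probability-weighted mixture of token embeddings.
    soft = p @ token_embeddings
    # Entropy-aware contrastive push away from the dominant token's
    # direction, discouraging collapse onto the top-probability token.
    direction = token_embeddings[top] / (np.linalg.norm(token_embeddings[top]) + 1e-12)
    return soft - alpha * direction
```

For a sharply peaked distribution the gate falls through to ordinary discrete decoding, so high-confidence steps receive no perturbation; only ambiguous steps switch to the soft-embedding path.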
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Science Reasoning | GPQA | Accuracy | 40.91 | 243 |
| Mathematical Reasoning | AIME 2024 | Pass@1 Accuracy | 83.33 | 165 |
| Mathematical Reasoning | AIME 24 | Accuracy | 46.67 | 154 |
| Mathematical Reasoning | AIME 2025 | Pass@1 Accuracy | 80 | 118 |
| Science Reasoning | GPQA | Pass@1 | 67.17 | 50 |
| Mathematical Reasoning | GSM8K | Accuracy | 96.06 | 43 |