SeLaR: Selective Latent Reasoning in Large Language Models

About

Chain-of-Thought (CoT) has become a cornerstone of reasoning in large language models, yet its effectiveness is constrained by the limited expressiveness of discrete token sampling. Recent latent reasoning approaches attempt to alleviate this limitation by replacing discrete tokens with soft embeddings (probability-weighted mixtures of token embeddings) or hidden states, but they commonly suffer from two issues: (1) activating latent reasoning at every step injects perturbations even into high-confidence steps, impairing reasoning stability; and (2) soft embeddings quickly collapse toward the highest-probability token, limiting exploration of alternative trajectories. To address these challenges, we propose SeLaR (Selective Latent Reasoning), a lightweight, training-free framework. SeLaR introduces an entropy-gated mechanism that activates soft embeddings only at low-confidence steps while preserving discrete decoding at high-confidence steps. In addition, an entropy-aware contrastive regularization pushes soft embeddings away from the dominant (highest-probability) token's direction, encouraging sustained exploration of multiple latent reasoning paths. Experiments on five reasoning benchmarks demonstrate that SeLaR consistently outperforms standard CoT and state-of-the-art training-free methods.
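As a rough illustration of the two mechanisms described in the abstract, the sketch below implements one decoding step with an entropy gate and a push away from the dominant token. It assumes a Hugging Face-style causal LM that accepts `inputs_embeds`; the constants `ENTROPY_THRESHOLD` and `PUSH_STRENGTH`, and the exact form of the entropy-aware push, are illustrative assumptions and not the paper's published implementation.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of SeLaR-style entropy-gated latent decoding. All
# hyperparameters here are assumed for illustration only.
ENTROPY_THRESHOLD = 1.0  # below this, the step counts as high-confidence
PUSH_STRENGTH = 0.1      # scale of the push away from the dominant token


@torch.no_grad()
def selective_latent_step(model, embedding_matrix, inputs_embeds):
    """Append one step: discrete embedding when confident, soft otherwise."""
    logits = model(inputs_embeds=inputs_embeds).logits[:, -1, :]      # (B, V)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)         # (B,)

    # Soft embedding: probability-weighted mixture of token embeddings.
    soft_emb = probs @ embedding_matrix                               # (B, D)

    # Push the mixture away from the dominant (argmax) token's embedding,
    # scaled by entropy, so it does not collapse onto the top-1 token.
    top_emb = embedding_matrix[probs.argmax(-1)]                      # (B, D)
    away = F.normalize(soft_emb - top_emb, dim=-1)
    soft_emb = soft_emb + PUSH_STRENGTH * entropy.unsqueeze(-1) * away

    # Discrete path: ordinary greedy decoding uses the argmax token.
    hard_emb = top_emb

    # Entropy gate: use the soft embedding only at low-confidence steps.
    use_soft = (entropy > ENTROPY_THRESHOLD).unsqueeze(-1)            # (B, 1)
    next_emb = torch.where(use_soft, soft_emb, hard_emb)
    return torch.cat([inputs_embeds, next_emb.unsqueeze(1)], dim=1)
```

In a full decoding loop this step would be repeated until an end-of-sequence token is emitted on the discrete path; because the gate falls back to ordinary greedy decoding at high-confidence steps, most steps should pay no extra cost over standard CoT.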

Renyu Fu, Guibo Luo • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Science Reasoning | GPQA | Accuracy | 40.91 | 243 |
| Mathematical Reasoning | AIME 2024 | Pass@1 Accuracy | 83.33 | 165 |
| Mathematical Reasoning | AIME 24 | Accuracy | 46.67 | 154 |
| Mathematical Reasoning | AIME 2025 | Pass@1 Accuracy | 80 | 118 |
| Science Reasoning | GPQA | Pass@1 | 67.17 | 50 |
| Mathematical Reasoning | GSM8K | Accuracy | 96.06 | 43 |
