Thinking in Uncertainty: Mitigating Hallucinations in MLRMs with Latent Entropy-Aware Decoding

About

Recent advances in multimodal large reasoning models (MLRMs) have significantly improved performance on visual question answering. However, we observe that transition words (e.g., because, however, and wait) are closely associated with hallucinations and tend to occur in high-entropy states. We hypothesize that reliance on discrete textual inputs drives the model toward sequential explicit reasoning, underutilizing dense contextual cues during high-entropy reasoning stages; we argue instead that rich contextual reasoning information can be extracted directly from the token probability distribution. Inspired by superposed representation theory, we propose leveraging latent superposed reasoning to integrate multiple candidate semantics and maintain latent reasoning trajectories, constructing rich semantic representations from token probability distributions to enhance in-context reasoning. To this end, we present Latent Entropy-Aware Decoding (LEAD), an efficient plug-and-play decoding strategy that leverages semantic context to achieve reliable reasoning. At the heart of our method is entropy-aware reasoning mode switching: the model employs probability-weighted continuous embeddings under high-entropy states and transitions back to discrete token embeddings as entropy decreases. In addition, we propose a prior-guided visual anchor injection strategy that encourages the model to attend to visual information. Extensive experiments show that LEAD effectively mitigates hallucinations across various MLRMs on multiple benchmarks.
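The core switching rule can be sketched in a few lines. The sketch below is illustrative only, not the paper's implementation: `tau` is a hypothetical entropy threshold, and the "superposed" embedding is taken to be the probability-weighted sum of token embeddings, consistent with the abstract's description of probability-weighted continuous embeddings under high entropy.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a logit vector."""
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def entropy(p: np.ndarray) -> float:
    """Shannon entropy (in nats) of a token probability distribution."""
    return float(-np.sum(p * np.log(p + 1e-12)))

def lead_step(logits: np.ndarray, embedding_table: np.ndarray, tau: float = 1.0):
    """One decoding step of entropy-aware mode switching (illustrative sketch).

    High entropy  -> probability-weighted continuous ("superposed") embedding
    Low entropy   -> discrete embedding of the argmax token
    `tau` is an assumed threshold hyperparameter, not a value from the paper.
    Returns the next input embedding and the mode that was used.
    """
    p = softmax(logits)
    if entropy(p) > tau:
        # Superposed continuous embedding: sum_i p_i * e_i
        return p @ embedding_table, "continuous"
    # Confident state: fall back to the ordinary discrete token embedding
    return embedding_table[int(np.argmax(p))], "discrete"
```

For example, a near-uniform distribution over the vocabulary (high entropy) yields a blended embedding, while a sharply peaked distribution selects a single token's embedding, so decoding degenerates to standard greedy behavior when the model is confident.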

Zhongxing Xu, Zhonghua Wang, Zhe Qian, Dachuan Shi, Feilong Tang, Ming Hu, Shiyan Su, Xiaocheng Zou, Wei Feng, Dwarikanath Mahapatra, Yifan Peng, Mingquan Lin, Zongyuan Ge • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | – | – | 1455 |
| Mathematical Reasoning | MathVista | Accuracy | 76.3 | 257 |
| Mathematical Reasoning | MathVision | Accuracy | 33.1 | 144 |
| Mathematical Reasoning | MathVerse | Accuracy | 55.1 | 109 |
| Geometric Reasoning | Geometry3K | Accuracy@1 | 69.1 | 42 |
| Visual Reasoning | MMVP | Accuracy | 47.2 | 32 |
| General Reasoning & Understanding | VMCBench | Accuracy | 82.1 | 21 |
| Hallucination Assessment | MMHalu | MMHalu Score | 4.27 | 21 |
| Hallucination Assessment | Bingo | Bingo Score | 3.85 | 21 |
| General Reasoning & Understanding | VSTAR | Accuracy | 81.7 | 21 |

Showing 10 of 14 rows.

Other info

GitHub
