
Thinking Diffusion: Penalize and Guide Visual-Grounded Reasoning in Diffusion Multimodal Language Models

About

Diffusion large language models (dLLMs) are emerging as promising alternatives to autoregressive (AR) LLMs. Recently, this paradigm has been extended to multimodal tasks, leading to the development of diffusion multimodal large language models (dMLLMs). These models are expected to retain the reasoning capabilities of LLMs while enabling faster inference through parallel generation. However, when combined with Chain-of-Thought (CoT) reasoning, dMLLMs exhibit two critical issues. First, we observe that dMLLMs often generate the final answer token at a very early timestep. This trend indicates that the model determines the answer before sufficient reasoning, leading to degraded reasoning performance. Second, during the initial timesteps, dMLLMs show minimal dependency on visual prompts, exhibiting a fundamentally different pattern of visual information utilization compared to AR vision-language models. In summary, these findings indicate that dMLLMs tend to generate premature final answers without sufficient grounding in visual inputs. To address these limitations, we propose Position and Step Penalty (PSP) and Visual Reasoning Guidance (VRG). PSP penalizes tokens in later positions during early timesteps, delaying premature answer generation and encouraging progressive reasoning across timesteps. VRG, inspired by classifier-free guidance, amplifies visual grounding signals to enhance the model's alignment with visual evidence. Extensive experiments across various dMLLMs demonstrate that our method achieves up to 7.5% higher accuracy while delivering more than a 3x speedup compared to reasoning with four times more diffusion steps.
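The two proposed mechanisms can be sketched in a few lines. The snippet below is a minimal, hedged illustration of the ideas as described in the abstract, not the paper's implementation: `psp_confidence` down-weights the unmasking confidence of later sequence positions at early diffusion timesteps (so answer tokens are decoded only after the reasoning chain), and `vrg_logits` applies a classifier-free-guidance-style combination that amplifies the difference between image-conditioned and image-free logits. The function names, the linear penalty schedule, and the parameters `alpha` and `w` are assumptions for illustration.

```python
def psp_confidence(conf, t, T, alpha=1.0):
    """Position and Step Penalty (sketch).

    conf: per-position unmasking confidence scores (list of floats).
    t:    current diffusion timestep (0 = first step), T: total steps.
    alpha: hypothetical penalty strength.

    The penalty grows with sequence position and shrinks as decoding
    progresses, so late positions (e.g. the final answer) are unmasked
    only at later timesteps.
    """
    L = len(conf)
    progress = t / max(T - 1, 1)          # 0 at the first step, 1 at the last
    return [
        c - alpha * (i / max(L - 1, 1)) * (1.0 - progress)
        for i, c in enumerate(conf)
    ]

def vrg_logits(logits_with_image, logits_without_image, w=1.5):
    """Visual Reasoning Guidance (sketch), classifier-free-guidance style.

    w > 1 amplifies the visual contribution; w = 1 recovers the
    image-conditioned logits unchanged.
    """
    return [
        u + w * (c - u)
        for c, u in zip(logits_with_image, logits_without_image)
    ]
```

With this schedule the first position keeps its full confidence at every step, while the last position is penalized most at t = 0 and not at all at the final step, which is one simple way to realize "delaying premature answer generation".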

Keuntae Kim, Mingyu Kang, Yong Suk Choi• 2026

Related benchmarks

Task                                  | Dataset          | Metric            | Result | Rank
Multimodal Science Question Answering | ScienceQA IMG    | Accuracy          | 73.4   | 131
Visual Question Answering             | V*Bench          | Accuracy          | 46.6   | 84
Visual Reasoning                      | MMBench          | Accuracy          | 75.3   | 48
Multimodal Chain-of-Thought Reasoning | M3CoT            | Accuracy          | 50.5   | 42
Multimodal Understanding              | MME              | Existence Score   | 187    | 16
Multimodal Understanding              | LLaVA-Bench Coco | LLaVA-Bench Score | 20     | 4
