
SPG: Sandwiched Policy Gradient for Masked Diffusion Language Models

About

Diffusion large language models (dLLMs) are emerging as an efficient alternative to autoregressive models thanks to their ability to decode multiple tokens in parallel. However, aligning dLLMs with human preferences or task-specific rewards via reinforcement learning (RL) is challenging: their log-likelihood is intractable, which precludes the direct application of standard policy gradient methods. Prior work substitutes surrogates such as the evidence lower bound (ELBO), but these one-sided approximations can introduce significant policy gradient bias. To address this, we propose Sandwiched Policy Gradient (SPG), which leverages both an upper and a lower bound of the true log-likelihood. Experiments show that SPG significantly outperforms baselines based on the ELBO or one-step estimation. Specifically, SPG improves accuracy over state-of-the-art RL methods for dLLMs by 3.6% on GSM8K, 2.6% on MATH500, 18.4% on Countdown, and 27.0% on Sudoku.
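To make the idea concrete, here is a minimal sketch of what a "sandwiched" policy-gradient surrogate could look like. All names are illustrative assumptions, not the authors' implementation: `lower` stands in for a per-sample ELBO estimate of log p(x), `upper` for a per-sample upper-bound estimate, and `alpha` for a hypothetical mixing weight between the two bounds.

```python
# Hypothetical sketch of a sandwiched policy-gradient surrogate (names are
# assumptions, not the paper's code). A one-sided surrogate uses only the
# ELBO; mixing in an upper bound lets the surrogate sandwich the intractable
# log-likelihood, which is the bias-reduction idea described above.

def spg_surrogate(lower, upper, advantages, alpha=0.5):
    """REINFORCE-style loss -E[A * ((1-alpha)*lower + alpha*upper)],
    plus its gradients with respect to the two bound estimates."""
    n = len(advantages)
    loss = -sum(a * ((1 - alpha) * lo + alpha * up)
                for a, lo, up in zip(advantages, lower, upper)) / n
    # The surrogate is linear in the bound estimates, so the gradient
    # w.r.t. each estimate is just the negated, scaled advantage:
    grad_lower = [-(1 - alpha) * a / n for a in advantages]
    grad_upper = [-alpha * a / n for a in advantages]
    return loss, grad_lower, grad_upper

# Toy usage: two samples, one rewarded above the baseline, one below.
loss, g_lo, g_up = spg_surrogate(
    lower=[-2.0, -1.0], upper=[-1.0, -0.5], advantages=[1.0, -1.0])
print(loss)  # 0.375
```

With `alpha=0` this reduces to the one-sided ELBO surrogate the paper argues against; `alpha` would control how much of the upper bound enters the gradient.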

Chenyu Wang, Paria Rashidinejad, DiJia Su, Song Jiang, Sid Wang, Siyan Zhao, Cai Zhou, Shannon Zejiang Shen, Feiyu Chen, Tommi Jaakkola, Yuandong Tian, Bo Liu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | - | - | 1326 |
| Commonsense Reasoning | WinoGrande | Accuracy | 84.3 | 1017 |
| Code Generation | HumanEval | Pass@1 | 74.6 | 1012 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 80.9 | 498 |
| Code Generation | HumanEval+ | Pass@1 | 69.1 | 356 |
| Mathematical Reasoning | MATH | Accuracy | 34.1 | 338 |
| Science Reasoning | GPQA | Accuracy | 25.9 | 227 |
| Commonsense Reasoning | HellaSwag | Accuracy | 83.1 | 207 |
| Code Generation | MBPP+ | Pass@1 | 69.2 | 206 |
| Code Generation | MBPP | Pass@1 | 80.2 | 153 |

Showing 10 of 19 rows

Other info

GitHub
