
wd1: Weighted Policy Optimization for Reasoning in Diffusion Language Models

About

Improving the reasoning capabilities of diffusion-based large language models (dLLMs) through reinforcement learning (RL) remains an open problem. Because the dLLM likelihood function is intractable, the current, old, and reference policy likelihoods must each be approximated at every policy optimization step. This reliance introduces additional computational overhead and can lead to large variance and estimation error in the RL objective, particularly when computing the policy ratio for importance sampling. To mitigate these issues, we introduce wd1, a novel ratio-free policy optimization approach that reformulates the RL objective as a weighted log-likelihood, requiring only a single approximation for the current parametrized policy likelihood. We formally show that the proposed method can be interpreted as energy-guided discrete diffusion training combined with negative-sample unlearning, confirming its theoretical soundness. In experiments on the LLaDA-8B model, wd1 outperforms diffusion-based GRPO (d1) at lower computational cost, achieving up to a $+59\%$ improvement in accuracy. Furthermore, we extend wd1 to denoising-stepwise weighted policy optimization (wd1++), achieving state-of-the-art math performance of $44.2\%$ on MATH500 and $84.5\%$ on GSM8K with only 20 RL training steps.
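The ratio-free idea can be sketched as follows: instead of an importance-sampling ratio between current and old policies, each sampled completion's (approximate) log-likelihood under the current policy is weighted by a function of its group-normalized advantage. This is an illustrative sketch only, not the paper's exact objective: the centered-softmax weighting, the `beta` temperature, and the function names are all assumptions for the example.

```python
import math

def group_advantages(rewards):
    """GRPO-style advantages: subtract the group-mean reward."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

def weighted_loglik_loss(log_probs, rewards, beta=1.0):
    """Ratio-free surrogate loss (illustrative, not the exact wd1 objective).

    Each completion's approximate sequence log-likelihood under the
    current policy is weighted by a centered softmax over its advantage:
    positive-advantage samples get positive weight (reinforced), while
    negative-advantage samples get negative weight, pushing their
    likelihood down (negative-sample unlearning). Only the current-policy
    log-likelihood is needed, with no old- or reference-policy ratio.
    """
    adv = group_advantages(rewards)
    z = sum(math.exp(beta * a) for a in adv)
    softmax = [math.exp(beta * a) / z for a in adv]
    # Centering makes the weights sum to zero, so low-reward samples
    # are actively unlearned rather than merely down-weighted.
    weights = [s - 1.0 / len(rewards) for s in softmax]
    # Loss to minimize: negative weighted log-likelihood.
    return -sum(w * lp for w, lp in zip(weights, log_probs))
```

Because the loss depends on a single likelihood estimate per sample, it avoids the compounding of approximation error that the abstract attributes to policy-ratio estimation in dLLMs.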

Xiaohang Tang, Rares Dolga, Sangwoong Yoon, Ilija Bogunovic • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | Accuracy | 84.5 | 983 |
| Code Generation | HumanEval | -- | -- | 850 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 82.3 | 797 |
| Mathematical Reasoning | MATH500 (test) | Accuracy | 39 | 381 |
| Code Generation | HumanEval+ (test) | Pass@1 | 32.9 | 81 |
| Planning | Countdown | Accuracy | 51.2 | 68 |
| Planning | Sudoku | Accuracy | 39.2 | 68 |
| Mathematical Reasoning | COUNTDOWN (test) | Accuracy | 51.2 | 36 |
| Math | MATH (test) | Accuracy | 50.5 | 36 |
| Mathematics | GSM8K 0 (test) | Accuracy | 82.9 | 32 |
Showing 10 of 27 rows
