
Co-GRPO: Co-Optimized Group Relative Policy Optimization for Masked Diffusion Model

About

Recently, Masked Diffusion Models (MDMs) have shown promising potential across vision, language, and cross-modal generation. However, a notable discrepancy exists between their training and inference procedures. In particular, MDM inference is a multi-step, iterative process governed not only by the model itself but also by various schedules that dictate the token-decoding trajectory (e.g., how many tokens to decode at each step). In contrast, MDMs are typically trained using a simplified, single-step BERT-style objective that masks a subset of tokens and predicts all of them simultaneously. This step-level simplification fundamentally disconnects the training paradigm from the trajectory-level nature of inference, leaving the inference schedules never optimized during training. In this paper, we introduce Co-GRPO, which reformulates MDM generation as a unified Markov Decision Process (MDP) that jointly incorporates both the model and the inference schedule. By applying Group Relative Policy Optimization at the trajectory level, Co-GRPO cooperatively optimizes model parameters and schedule parameters under a shared reward, without requiring costly backpropagation through the multi-step generation process. This holistic optimization aligns training with inference more thoroughly and substantially improves generation quality. Empirical results across four benchmarks (ImageReward, HPS, GenEval, and DPG-Bench) demonstrate the effectiveness of our approach. For more details, please refer to our project page: https://co-grpo.github.io/
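The core mechanics described above, sampling a group of trajectories per prompt, normalizing each trajectory's reward against the group, and weighting the log-probabilities of both the model's token predictions and the schedule's decoding decisions by that shared advantage, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the per-trajectory list structure are assumptions for clarity.

```python
import math


def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantage: normalize each trajectory's reward by the
    group's mean and standard deviation (names here are illustrative)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var)
    return [(r - mean) / (std + eps) for r in rewards]


def co_grpo_loss(model_logps, sched_logps, advantages):
    """Hypothetical trajectory-level objective: the model's per-step token
    log-probs and the schedule's per-step decoding log-probs share one
    group-relative advantage, so both are co-optimized by the same reward.
    Each element of model_logps / sched_logps is a list of per-step
    log-probabilities for one sampled trajectory."""
    loss = 0.0
    for m_lp, s_lp, adv in zip(model_logps, sched_logps, advantages):
        # Maximizing advantage-weighted log-likelihood == minimizing this loss.
        loss -= adv * (sum(m_lp) + sum(s_lp))
    return loss / len(advantages)
```

Because the advantage is computed from sampled trajectory rewards rather than by differentiating through the generation loop, no backpropagation through the multi-step denoising process is needed, which is the property the abstract emphasizes.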

Renping Zhou, Zanlin Ni, Tianyi Chen, Zeyu Liu, Yang Yue, Yulin Wang, Yuxuan Wang, Jingshu Liu, Gao Huang • 2025

Related benchmarks

Task                     | Dataset             | Result                             | Rank
Text-to-Image Generation | ImageReward         | ImageReward Score: 1.122           | 56
Text-to-Image Generation | HPS v2.0            | Animation Score: 29.7              | 17
Text-to-Image Generation | GenEval zero-shot   | GenEval Score: 0.55                | 8
Text-to-Image Generation | DPG-Bench zero-shot | DPG-Bench Score (Zero-Shot): 70.1  | 5
