
MaskFocus: Focusing Policy Optimization on Critical Steps for Masked Image Generation

About

Reinforcement learning (RL) has demonstrated significant potential for post-training language models and autoregressive visual generative models, but adapting RL to masked generative models remains challenging. The core difficulty is that policy optimization must account for the likelihood of every step in the model's multi-step, iterative refinement process. Relying on entire sampling trajectories incurs high computational cost, whereas naively optimizing randomly chosen steps often yields suboptimal results. In this paper, we present MaskFocus, a novel RL framework that achieves effective policy optimization for masked generative models by focusing on critical steps. Specifically, we estimate the step-level information gain by measuring the similarity between the intermediate image at each sampling step and the final generated image. Crucially, we use this signal to identify the most critical and valuable steps and execute focused policy optimization on them. Furthermore, we design an entropy-based dynamic routing sampling mechanism that encourages the model to explore more valuable masking strategies for samples with low entropy. Extensive experiments on multiple text-to-image benchmarks validate the effectiveness of our method.
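The critical-step selection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the pixel-space cosine similarity, and the "information gain as similarity increase between consecutive steps" formulation are assumptions, since the abstract does not specify the similarity metric or the exact gain definition.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two images, flattened to vectors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def select_critical_steps(intermediates, final_image, k=2):
    """Rank sampling steps by a simple proxy for step-level information
    gain: the increase in similarity to the final image contributed by
    each step. Returns the indices of the top-k steps, in order."""
    sims = [cosine_sim(x, final_image) for x in intermediates]
    # Gain of step t = how much closer to the final image it got us.
    gains = [sims[0]] + [sims[t] - sims[t - 1] for t in range(1, len(sims))]
    top = np.argsort(gains)[::-1][:k]
    return sorted(int(i) for i in top)

# Toy trajectory: noise gradually refined toward the final image.
rng = np.random.default_rng(0)
final = rng.random((8, 8))
steps = [0.1 * final + rng.random((8, 8)), 0.5 * final, 0.9 * final, final]
print(select_critical_steps(steps, final, k=2))  # early steps carry the gain
```

Policy gradients would then be computed only on the selected step indices, which is what makes the optimization tractable compared with backpropagating through the full trajectory.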

Guohui Zhang, Hu Yu, Xiaoxiao Ma, Yaning Pan, Hang Xu, Feng Zhao • 2025

Related benchmarks

Task                       Dataset                                  Result  Rank
Text-to-Image Generation   GenEval (Two Objects)                    91      87
Text-to-Image Generation   Human Preference Evaluation Set (DEQA)   4.39    6
