
OP-GRPO: Efficient Off-Policy GRPO for Flow-Matching Models

About

Post-training via GRPO has demonstrated remarkable effectiveness in improving the generation quality of flow-matching models. However, GRPO suffers from inherently low sample efficiency due to its on-policy training paradigm. To address this limitation, we present OP-GRPO, the first Off-Policy GRPO framework tailored to flow-matching models. First, we actively select high-quality trajectories and adaptively incorporate them into a replay buffer for reuse in subsequent training iterations. Second, to mitigate the distribution shift introduced by off-policy samples, we propose a sequence-level importance sampling correction that preserves the integrity of GRPO's clipping mechanism while ensuring stable policy updates. Third, we show theoretically and empirically that late denoising steps yield ill-conditioned off-policy ratios, and we mitigate this by truncating trajectories at late steps. Across image and video generation benchmarks, OP-GRPO achieves performance comparable or superior to Flow-GRPO with only 34.2% of the training steps on average, yielding substantial gains in training efficiency while maintaining generation quality.
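To make the three ingredients concrete, the sketch below shows one way they could fit together in PyTorch: a reward-ranked replay buffer, a sequence-level importance ratio with PPO-style clipping, and truncation of late denoising steps. This is a hypothetical reading of the abstract, not the paper's actual implementation; the buffer capacity, clipping threshold `eps`, truncation step `t_trunc`, and the group-normalized advantage are all illustrative assumptions.

```python
import torch

class ReplayBuffer:
    """Reward-ranked buffer: keeps the highest-reward trajectories for reuse.

    Illustrative assumption: "actively select high-quality trajectories" is
    read here as top-k selection by reward; the paper's rule may differ.
    """

    def __init__(self, capacity: int = 256):
        self.capacity = capacity
        self.items: list[tuple[float, dict]] = []  # (reward, trajectory) pairs

    def add(self, reward: float, traj: dict) -> None:
        # Insert, then keep only the top-`capacity` trajectories by reward.
        self.items.append((reward, traj))
        self.items.sort(key=lambda x: x[0], reverse=True)
        del self.items[self.capacity:]

def sequence_level_ratio(logp_new: torch.Tensor,
                         logp_old: torch.Tensor,
                         t_trunc: int) -> torch.Tensor:
    """Importance ratio over a whole denoising trajectory, truncated at t_trunc.

    logp_new, logp_old: (batch, T) per-step action log-probs under the
    current and behavior policies. Steps >= t_trunc are dropped, since the
    abstract argues late steps yield ill-conditioned off-policy ratios.
    """
    log_ratio = (logp_new - logp_old)[:, :t_trunc].sum(dim=1)  # log of the product
    return log_ratio.exp()

def op_grpo_loss(logp_new: torch.Tensor,
                 logp_old: torch.Tensor,
                 rewards: torch.Tensor,
                 t_trunc: int = 40,
                 eps: float = 0.2) -> torch.Tensor:
    # Group-relative advantage, as in standard GRPO: normalize rewards
    # within the sampled group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    ratio = sequence_level_ratio(logp_new, logp_old, t_trunc)
    # PPO-style clipped surrogate, applied once per trajectory so the clip
    # interval [1 - eps, 1 + eps] stays meaningful for off-policy samples.
    unclipped = ratio * adv
    clipped = ratio.clamp(1.0 - eps, 1.0 + eps) * adv
    return -torch.minimum(unclipped, clipped).mean()

if __name__ == "__main__":
    B, T = 8, 50  # group size and number of denoising steps (illustrative)
    logp_new = 0.01 * torch.randn(B, T)
    logp_old = 0.01 * torch.randn(B, T)
    rewards = torch.rand(B)
    print(op_grpo_loss(logp_new, logp_old, rewards, t_trunc=40).item())
```

Computing the ratio once per trajectory (a single product over the retained steps) rather than per step is what lets the usual clip interval apply unchanged to replayed samples, and truncating before the final steps drops exactly the terms whose per-step ratios are most ill-conditioned.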

Liyu Zhang, Kehan Li, Tingrui Han, Tao Zhao, Yuxuan Sheng, Shibo He, Chao Li • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Compositional text-to-image generation | T2I-CompBench++ (test) | Color | 84.5 | 17 |
| Compositional Image Generation | EvalGEN | EvalGen Score | 0.96 | 5 |
| Human Preference Alignment | Human Preference Alignment | PickScore | 23.64 | 5 |
