
Di3PO - Diptych Diffusion DPO for Targeted Improvements in Image Generation

About

Existing methods for preference tuning of text-to-image (T2I) diffusion models often rely on computationally expensive generation steps to create positive and negative pairs of images. These approaches frequently yield training pairs that either lack meaningful differences, are expensive to sample and filter, or exhibit significant variance in irrelevant pixel regions, thereby degrading training efficiency. To address these limitations, we introduce "Di3PO", a novel method for constructing positive and negative pairs that isolates specific regions targeted for improvement during preference tuning, while keeping the surrounding context in the image stable. We demonstrate the efficacy of our approach by applying it to the challenging task of text rendering in diffusion models, showcasing improvements over baseline methods of SFT and DPO.
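The core idea of the pair construction — a preferred and a rejected image that differ only inside the region targeted for improvement, with identical surrounding context — can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline; the function name, the mask-based compositing, and the toy arrays are all assumptions introduced here:

```python
import numpy as np

def make_preference_pair(base, corrected_region, mask):
    """Build a (preferred, rejected) image pair that differs only inside
    `mask`; every other pixel is identical, so preference gradients
    concentrate on the targeted region rather than irrelevant context.

    base             : HxWxC float array, the flawed generation (rejected)
    corrected_region : HxWxC float array holding the improved content
    mask             : HxW boolean array marking the region to improve
    """
    rejected = base
    # Composite the corrected content into the base only where mask is True.
    preferred = np.where(mask[..., None], corrected_region, base)
    return preferred, rejected

# Toy example: 4x4 RGB images, "fix" only the top-left 2x2 block.
base = np.zeros((4, 4, 3))
fix = np.ones((4, 4, 3))
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
pref, rej = make_preference_pair(base, fix, mask)
assert np.array_equal(pref[2:], rej[2:])  # context pixels untouched
```

Because the two images agree everywhere outside the mask, the preference signal cannot be explained by incidental pixel variance — which is exactly the failure mode of sampling-based pair construction that the abstract describes.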

Sanjana Reddy, Ishaan Malhi, Sally Ma, Praneet Dutta (Google, Google DeepMind) • 2026

Related benchmarks

Task: Text Rendering
Dataset: Diptych 1.0 (test)
Metric: Edit Distance
Result: 0.164
Rank: 12
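The edit-distance score above presumably compares text extracted from the rendered image (e.g. via OCR) against the prompt's target string. A standard normalized Levenshtein computation — an assumption about the metric, not the benchmark's documented protocol — looks like:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def normalized_edit_distance(pred: str, target: str) -> float:
    """Edit distance scaled to [0, 1] by the longer string's length."""
    if not pred and not target:
        return 0.0
    return levenshtein(pred, target) / max(len(pred), len(target))

# e.g. rendered "HELO" vs target "HELLO":
# one insertion, so normalized_edit_distance("HELO", "HELLO") == 0.2
```

Under this reading, lower is better: 0.0 means the rendered text matches the target exactly.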
