
Curriculum Direct Preference Optimization for Diffusion and Consistency Models

About

Direct Preference Optimization (DPO) has been proposed as an effective and efficient alternative to reinforcement learning from human feedback (RLHF). In this paper, we propose a novel and enhanced version of DPO based on curriculum learning for text-to-image generation. Our method is divided into two training stages. First, a ranking of the examples generated for each prompt is obtained by employing a reward model. Then, increasingly difficult pairs of examples are sampled and provided to a text-to-image generative (diffusion or consistency) model. Generated samples that are far apart in the ranking are considered to form easy pairs, while those that are close in the ranking form hard pairs. In other words, we use the rank difference between samples as a measure of difficulty. The sampled pairs are split into batches according to their difficulty levels, which are gradually used to train the generative model. Our approach, Curriculum DPO, is compared against state-of-the-art fine-tuning approaches on nine benchmarks, outperforming the competing methods in terms of text alignment, aesthetics and human preference. Our code is available at https://github.com/CroitoruAlin/Curriculum-DPO.
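The pair-construction step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `curriculum_pairs`, the linear split into equally sized difficulty levels, and the use of list indices as reward-model ranks are all assumptions for the sake of the example.

```python
def curriculum_pairs(ranked_samples, num_levels=3):
    """Group preference pairs by difficulty for curriculum training.

    ranked_samples: generations for one prompt, ordered best-first
    (the ranking is assumed to come from a reward model, as in the paper).
    Returns a list of `num_levels` batches, ordered easy -> hard.
    """
    n = len(ranked_samples)
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            gap = j - i  # rank difference: larger gap -> easier pair
            pairs.append((gap, ranked_samples[i], ranked_samples[j]))
    # Easy pairs (large rank gap) come first; hard pairs (small gap) last.
    pairs.sort(key=lambda p: -p[0])
    # Split into difficulty levels that are introduced gradually in training.
    size = (len(pairs) + num_levels - 1) // num_levels
    return [pairs[k * size:(k + 1) * size] for k in range(num_levels)]
```

During fine-tuning, the generative model would first be trained on the easy batches and then progressively on the harder ones, with the winner/loser of each pair feeding a standard DPO loss.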

Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, Nicu Sebe, Mubarak Shah• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | Dataset D1 (test) | Text Alignment | 0.7703 | 14 |
| Text-to-Image Generation | Dataset D2 (test) | Text Alignment | 0.6234 | 14 |
| Text Alignment | User Study | Average Ranking | 3.175 | 12 |
| Aesthetics | Human Evaluation Study | Average Rating Score | 3.664 | 8 |
| Text-to-Image Generation | Pick-a-Pic D3 | Text Alignment | 0.5413 | 4 |
