ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation

About

We present a comprehensive solution for learning and improving text-to-image models from human preference feedback. To begin with, we build ImageReward -- the first general-purpose text-to-image human preference reward model -- to effectively encode human preferences. Its training is based on our systematic annotation pipeline, including rating and ranking, which has collected 137k expert comparisons to date. In human evaluation, ImageReward outperforms existing scoring models and metrics, making it a promising automatic metric for evaluating text-to-image synthesis. On top of it, we propose Reward Feedback Learning (ReFL), a direct tuning algorithm that optimizes diffusion models against a scorer. Both automatic and human evaluation confirm ReFL's advantages over competing methods. All code and datasets are available at https://github.com/THUDM/ImageReward.
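As a usage illustration, the snippet below scores candidate images for a prompt with the released reward model. It follows the usage documented in the repository README (the package installs via pip install image-reward); the prompt and file paths here are hypothetical placeholders, and the exact API should be verified against the current release.

```python
# Scoring generated images with ImageReward as an automatic metric.
# Install first with: pip install image-reward
# (Usage follows the repository README; verify against the current version.)
import ImageReward as RM

# Load the pretrained reward model (weights are downloaded on first use).
model = RM.load("ImageReward-v1.0")

prompt = "a red sports car parked on a beach at sunset"  # hypothetical prompt
image_paths = ["sample1.png", "sample2.png"]             # hypothetical local files

# Higher reward means better agreement with human preference for this prompt.
rewards = model.score(prompt, image_paths)
print(rewards)

# The model can also rank a set of candidates generated for the same prompt.
ranking, rewards = model.inference_rank(prompt, image_paths)
print(ranking, rewards)
```

To make the ReFL idea concrete, here is a minimal, self-contained sketch of its core gradient flow: denoise most of the trajectory without gradients, take one late denoising step with gradients enabled, score the predicted clean sample with a frozen reward model, and update the generator to increase that score. The ToyDenoiser and ToyReward modules are hypothetical stand-ins, not the paper's networks; the real method additionally decodes latents through a VAE and mixes in the pretraining objective, so treat this only as a sketch of the gradient path and consult the repository for the actual implementation.

```python
# Minimal ReFL-style sketch: backpropagate a frozen scorer's reward
# through one late denoising step of the generator.
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Hypothetical stand-in for the diffusion U-Net: predicts a cleaner latent."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x, t):
        return self.net(x)  # toy model: ignores the timestep t

class ToyReward(nn.Module):
    """Hypothetical stand-in for ImageReward: maps a sample to a scalar score."""
    def __init__(self, dim=16):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, x):
        return self.head(x).squeeze(-1)

denoiser = ToyDenoiser()
reward_model = ToyReward()
for p in reward_model.parameters():  # the scorer stays frozen
    p.requires_grad_(False)

opt = torch.optim.AdamW(denoiser.parameters(), lr=1e-4)
T = 40  # total denoising steps (toy value)

for step in range(100):
    x = torch.randn(8, 16)                       # start from pure noise
    t_stop = torch.randint(1, 10, (1,)).item()   # random late step to stop at
    with torch.no_grad():                        # denoise most steps without grads
        for t in range(T, t_stop, -1):
            x = denoiser(x, t)
    x0_pred = denoiser(x, t_stop)                # ONE step with gradients enabled
    loss = -reward_model(x0_pred).mean()         # maximize the predicted reward
    opt.zero_grad()
    loss.backward()                              # grads flow only into the denoiser
    opt.step()
```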

Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Zheng, Ming Ding, Jie Tang, Yuxiao Dong • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | MS-COCO 2014 (val) | – | – | 137 |
| Text-to-Image Generation | GenEval | GenEval Score | 0.87 | 88 |
| Perceptual Quality Assessment | HPE-Bench 1.0 (test) | SRCC | 0.4304 | 66 |
| Perceptual Quality Assessment | TIEdit 1.0 (test) | SRCC | 0.0453 | 40 |
| Editing Alignment Assessment | HPE-Bench 1.0 (test) | SRCC | 0.3079 | 33 |
| 3D Appearance Preference Evaluation | Human Preference Evaluation Dataset (Appearance) | Accuracy | 73 | 30 |
| Text-to-Image Alignment | Pick-a-Pic v2 | ImageReward | 1.0119 | 27 |
| Text-to-Image Generation | HPD v2 (test) | HPSv2 | 34.95 | 25 |
| Text-to-Image Generation | HPD | PickScore | 22.66 | 22 |
| Text-to-Image Generation | DrawBench | HPSv2.1 | 31.08 | 19 |

Showing 10 of 59 rows.

Other info

Code: https://github.com/THUDM/ImageReward
