
ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation

About

We present a comprehensive solution to learn and improve text-to-image models from human preference feedback. To begin with, we build ImageReward -- the first general-purpose text-to-image human preference reward model -- to effectively encode human preferences. Its training is based on our systematic annotation pipeline, including rating and ranking, which has collected 137k expert comparisons to date. In human evaluation, ImageReward outperforms existing scoring models and metrics, making it a promising automatic metric for evaluating text-to-image synthesis. On top of it, we propose Reward Feedback Learning (ReFL), a direct tuning algorithm that optimizes diffusion models against a scorer. Both automatic and human evaluation support ReFL's advantages over competing methods. All code and datasets are provided at https://github.com/THUDM/ImageReward.
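The abstract notes that ImageReward is trained from ranked expert comparisons. A standard way to turn a ranking of k images for the same prompt into a training objective is a Bradley-Terry-style pairwise loss over all ordered pairs, which is the formulation reward models of this kind typically use. A minimal, dependency-free sketch (the plain floats stand in for the model's scalar scores; this is an illustration of the loss shape, not the paper's training code):

```python
import math
from itertools import combinations

def pairwise_ranking_loss(scores):
    """Ranking loss for reward scores listed from most- to
    least-preferred image of the same prompt.

    For every ordered pair (better, worse) the penalty is
    -log(sigmoid(s_better - s_worse)), so the loss shrinks as the
    model separates preferred images from dispreferred ones."""
    losses = []
    for i, j in combinations(range(len(scores)), 2):  # i ranked above j
        diff = scores[i] - scores[j]
        losses.append(-math.log(1.0 / (1.0 + math.exp(-diff))))
    return sum(losses) / len(losses)

# Scores that agree with the annotated ranking incur a low loss ...
good = pairwise_ranking_loss([2.0, 1.0, 0.0])
# ... while inverted scores incur a high one.
bad = pairwise_ranking_loss([0.0, 1.0, 2.0])
```

In practice the scores would come from the reward model's forward pass on (prompt, image) pairs, and the loss would be minimized by gradient descent over its parameters.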

Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, Yuxiao Dong • 2023
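ReFL, as described in the abstract, tunes the diffusion model directly against a scorer: generate an image, score it, and push the generator's parameters uphill on the reward. The core idea can be shown with a deliberately tiny stand-in, where a single parameter plays the role of the generator's weights and a quadratic function plays the role of ImageReward (both hypothetical placeholders, not the paper's models):

```python
# Toy illustration of reward-feedback tuning: treat the scorer's
# output as a differentiable training signal for the generator.

def generate(theta):
    return theta  # placeholder "generator": the image IS the parameter

def reward(image, target=1.0):
    return -(image - target) ** 2  # peaks when the image hits the target

def refl_step(theta, lr=0.1, target=1.0):
    # d(reward)/d(theta) for this toy setup, computed by hand:
    grad = -2.0 * (generate(theta) - target)
    return theta + lr * grad  # gradient *ascent* on the reward

theta = 0.0
for _ in range(50):
    theta = refl_step(theta)
```

In the real algorithm the gradient flows through a late denoising step of the diffusion model and the reward model's forward pass via automatic differentiation; the toy hand-computed gradient just makes the feedback loop visible.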

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | MS-COCO 2014 (val) | -- | -- | 128 |
| Perceptual Quality Assessment | HPE-Bench 1.0 (test) | SRCC | 0.4304 | 66 |
| Editing Alignment Assessment | HPE-Bench 1.0 (test) | SRCC | 0.3079 | 33 |
| 3D Appearance Preference Evaluation | Human Preference Evaluation Dataset Appearance | Accuracy | 73 | 30 |
| Human Preference Evaluation | ImageReward (test) | Preference Accuracy | 0.6515 | 18 |
| Human Preference Evaluation | HPD v2 (test) | Preference Accuracy | 73.95 | 18 |
| 3D Text-Fidelity Preference Evaluation | OOD Preference Evaluation Dataset Text-Fidelity | Accuracy | 85 | 15 |
| 3D Surface Preference Evaluation | OOD Preference Evaluation Dataset Surface | Accuracy | 54 | 15 |
| 3D Appearance Preference Evaluation | Synthetic Preference Evaluation Dataset Appearance | Accuracy | 60 | 15 |
| 3D Surface Preference Evaluation | Synthetic Preference Evaluation Dataset Surface | Accuracy | 70 | 15 |

Showing 10 of 30 rows.

Other info

Code: https://github.com/THUDM/ImageReward
