# ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation

## About
We present a comprehensive solution to learning and improving text-to-image models from human preference feedback. First, we build ImageReward -- the first general-purpose text-to-image human preference reward model -- to effectively encode human preferences. Its training is based on our systematic annotation pipeline, covering both rating and ranking, which has collected 137k expert comparisons to date. In human evaluation, ImageReward outperforms existing scoring models and metrics, making it a promising automatic metric for evaluating text-to-image synthesis. Building on ImageReward, we propose Reward Feedback Learning (ReFL), a direct tuning algorithm that optimizes diffusion models against a scorer. Both automatic and human evaluation confirm ReFL's advantages over competing methods. All code and datasets are available at [https://github.com/THUDM/ImageReward](https://github.com/THUDM/ImageReward).
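For concreteness, scoring candidate images with the released model looks roughly like the snippet below. It follows the repository's pip package (`image-reward`); treat the exact function names and checkpoint tag as assumptions that may differ across versions.

```python
# pip install image-reward
import ImageReward as RM

# Load the pretrained reward model (weights are downloaded on first use);
# the checkpoint name "ImageReward-v1.0" follows the repo's README.
model = RM.load("ImageReward-v1.0")

prompt = "a painting of an ocean with clouds and birds, day time"
images = ["sample1.png", "sample2.png", "sample3.png"]  # candidate generations

# Score each candidate against the prompt; higher means more preferred.
rewards = model.score(prompt, images)
print(rewards)
```

ReFL itself backpropagates the reward signal through a late denoising step of the diffusion model. As a rough, self-contained illustration of that idea (a toy sketch, not the paper's algorithm: `ToyGenerator` and `ToyScorer` are hypothetical stand-ins), the loop below tunes a generator to maximize a frozen scorer's output:

```python
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for a diffusion model's differentiable final denoising step."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(16, 16)

    def forward(self, z):
        return self.net(z)  # "image" produced from latent noise

class ToyScorer(nn.Module):
    """Stand-in for a frozen reward model such as ImageReward."""
    def forward(self, images):
        return images.sum(dim=-1)

generator, scorer = ToyGenerator(), ToyScorer()
opt = torch.optim.AdamW(generator.parameters(), lr=1e-4)

for _ in range(3):  # a few ReFL-style updates
    z = torch.randn(8, 16)         # batch of latent noise
    images = generator(z)          # generation with gradients enabled
    loss = -scorer(images).mean()  # maximize the scorer's reward
    opt.zero_grad()
    loss.backward()
    opt.step()
```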
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | MS-COCO 2014 (val) | -- | -- | 128 |
| Perceptual Quality Assessment | HPE-Bench 1.0 (test) | SRCC | 0.4304 | 66 |
| Editing Alignment Assessment | HPE-Bench 1.0 (test) | SRCC | 0.3079 | 33 |
| 3D Appearance Preference Evaluation | Human Preference Evaluation Dataset Appearance | Accuracy | 73 | 30 |
| Human Preference Evaluation | ImageReward (test) | Preference Accuracy | 0.6515 | 18 |
| Human Preference Evaluation | HPD v2 (test) | Preference Accuracy | 73.95 | 18 |
| 3D Text-Fidelity Preference Evaluation | OOD Preference Evaluation Dataset Text-Fidelity | Accuracy | 85 | 15 |
| 3D Surface Preference Evaluation | OOD Preference Evaluation Dataset Surface | Accuracy | 54 | 15 |
| 3D Appearance Preference Evaluation | Synthetic Preference Evaluation Dataset Appearance | Accuracy | 60 | 15 |
| 3D Surface Preference Evaluation | Synthetic Preference Evaluation Dataset Surface | Accuracy | 70 | 15 |