HPSv3: Towards Wide-Spectrum Human Preference Score
About
Evaluating text-to-image generation models requires alignment with human perception, yet existing human-centric metrics are constrained by limited data coverage, suboptimal feature extraction, and inefficient loss functions. To address these challenges, we introduce Human Preference Score v3 (HPSv3). (1) We release HPDv3, the first wide-spectrum human preference dataset, integrating 1.08M text-image pairs and 1.17M annotated pairwise comparisons drawn from state-of-the-art generative models and from real-world images ranging from low to high quality. (2) We introduce a VLM-based preference model trained with an uncertainty-aware ranking loss for fine-grained ranking. In addition, we propose Chain-of-Human-Preference (CoHP), an iterative image refinement method that improves quality without extra data by using HPSv3 to select the best image at each step. Extensive experiments demonstrate that HPSv3 serves as a robust metric for wide-spectrum image evaluation, and that CoHP offers an efficient, human-aligned approach to improving image generation quality. The code and dataset are available at the HPSv3 Homepage.
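The exact form of the uncertainty-aware ranking loss is not given in this summary. A minimal sketch of one plausible formulation, assuming the model predicts each image's score as a Gaussian (mean and standard deviation) and the loss maximizes the probability that the annotated winner outranks the loser (a Thurstone/Bradley-Terry-style comparison), might look like:

```python
import math

def uncertainty_aware_ranking_loss(mu_a, sigma_a, mu_b, sigma_b):
    """Hypothetical sketch: image A is the human-preferred image.
    Each score is modeled as an independent Gaussian N(mu, sigma^2);
    the loss is the negative log-probability that A's score exceeds B's.
    Larger predicted uncertainty softens the penalty on hard pairs."""
    # P(score_A > score_B) for independent Gaussians, via the normal CDF
    z = (mu_a - mu_b) / math.sqrt(sigma_a ** 2 + sigma_b ** 2)
    p_a_wins = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    # Negative log-likelihood of the annotated preference
    return -math.log(max(p_a_wins, 1e-12))
```

When the two means are equal the win probability is 0.5 and the loss is log 2; as the preferred image's mean pulls ahead (or predicted uncertainty shrinks on a correctly ranked pair), the loss decreases.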
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Human Preference Evaluation | HPD v2 (test) | Preference Accuracy | 85.36 | 18 |
| Human Preference Evaluation | ImageReward (test) | Preference Accuracy | 0.6703 | 18 |
| Pairwise Preference | HPD v3 (test) | Accuracy | 76.03 | 11 |
| Pairwise Preference | GenAI Bench (test) | Accuracy | 70.95 | 11 |
| Image Generation Assessment | GenAI-Bench Image (test) | Accuracy | 70.9 | 8 |
| Image Generation Assessment | MMRB2 (test) | Accuracy | 58.5 | 8 |
| Text-to-Image Generation | T2I-CompBench out-of-domain | Semantic Consistency | 46.46 | 7 |
| Text-to-Image Generation | GenEval out-of-domain | Semantic Consistency | 61.14 | 7 |
| Text-to-Image Generation | Out-of-Domain Evaluation Set | CLIP Score | 34.12 | 7 |
| Semantic Consistency | UniGenBench In-domain v1 | Overall Score | 57.98 | 7 |
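The Chain-of-Human-Preference procedure described in the abstract (iteratively refining and using HPSv3 to select the best image at each step) can be sketched as a best-of-N selection loop. The `generate`, `refine`, and `hpsv3_score` callables below are hypothetical stand-ins for the actual model calls, not part of the released API:

```python
def cohp_refine(prompt, generate, refine, hpsv3_score,
                num_candidates=4, num_steps=3):
    """Hypothetical sketch of Chain-of-Human-Preference (CoHP):
    sample several candidates, keep the one HPSv3 scores highest,
    then refine that winner in each subsequent round."""
    # Round 1: pick the best initial generation
    best = max((generate(prompt) for _ in range(num_candidates)),
               key=lambda img: hpsv3_score(prompt, img))
    # Later rounds: refine the current best and re-select
    for _ in range(num_steps - 1):
        candidates = [refine(prompt, best) for _ in range(num_candidates)]
        candidates.append(best)  # retain the current best as a fallback
        best = max(candidates, key=lambda img: hpsv3_score(prompt, img))
    return best
```

Keeping the current best in the candidate pool makes the selected score non-decreasing across steps, so refinement can only help under the reward model.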