Guiding Perception-Reasoning Closer to Human in Blind Image Quality Assessment
About
Humans assess image quality through a perception-reasoning cascade, integrating sensory cues with implicit reasoning to form self-consistent judgments. In this work, we investigate how a model can acquire a both human-like and self-consistent reasoning capability for blind image quality assessment (BIQA). We first collect human evaluation data that capture several aspects of the human perception-reasoning pipeline. We then adopt reinforcement learning, using the human annotations as reward signals to guide the model toward human-like perception and reasoning. To enable the model to internalize a self-consistent reasoning capability, we design a reward that drives the model to infer image quality purely from its self-generated descriptions. Empirically, our approach achieves score prediction performance comparable to state-of-the-art BIQA systems under standard metrics, including the Pearson and Spearman correlation coefficients. Beyond the rating score, we assess human-model alignment using ROUGE-1 to measure the similarity between model-generated and human perception-reasoning chains. On over 1,000 human-annotated samples, our model reaches a ROUGE-1 score of 0.512 (cf. 0.443 for the baseline), indicating substantial coverage of human explanations and marking a step toward human-like, interpretable reasoning in BIQA.
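ROUGE-1 as used above is a unigram-overlap measure between a model-generated explanation and a human-written one. The sketch below is an illustrative, self-contained implementation (not the paper's exact evaluation code, which may apply different tokenization or stemming): it counts clipped unigram matches and reports precision, recall, and F1.

```python
from collections import Counter

def rouge1(candidate: str, reference: str) -> dict:
    """Unigram-overlap ROUGE-1 between a model-generated and a human
    perception-reasoning chain. Tokenization here is naive whitespace
    splitting on lowercased text (an assumption, not the paper's setup)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: three of the reference's four unigrams are covered.
scores = rouge1("the image is blurry and dark", "the image looks blurry")
print(scores["recall"], scores["f1"])  # 0.75 0.6
```

A corpus-level score such as the 0.512 reported above would then be an average of per-sample F1 (or recall) values over the annotated set.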
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Quality Assessment | CSIQ (test) | SRCC 0.823 | 103 |
| Image Quality Assessment | SPAQ (test) | SRCC 0.907 | 77 |
| No-Reference Image Quality Assessment | KADID (test) | SROCC 0.734 | 42 |
| Image Quality Assessment | KonIQ (test) | SROCC 0.92 | 38 |
| Blind Image Quality Assessment | LIVE-W (test) | PLCC 0.877 | 34 |
| Blind Image Quality Assessment | AGIQA (test) | PLCC 0.803 | 34 |
| Human Consistency Evaluation | Q-Reasoning (test) | ROUGE-1 51.4 | 6 |
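The SRCC/SROCC and PLCC entries in the table are the standard Spearman rank and Pearson linear correlations between predicted and ground-truth quality scores. A minimal sketch of both, assuming tie-free scores (a simplification; production code such as `scipy.stats.spearmanr` handles ties with average ranks):

```python
def pearson(x, y):
    """Pearson linear correlation coefficient (PLCC) between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank correlation (SRCC): Pearson correlation of the ranks.
    Ranks are assigned by sort order; ties are not averaged in this sketch."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1.0
        return r
    return pearson(ranks(x), ranks(y))

# A monotone but nonlinear relation: SRCC is 1.0, PLCC is below 1.0.
mos = [1.0, 2.0, 3.0, 4.0, 5.0]
pred = [0.1, 0.2, 0.5, 2.0, 9.0]
print(spearman(mos, pred), pearson(mos, pred))
```

This also illustrates why both metrics are reported: SRCC measures monotonic agreement in ranking, while PLCC additionally penalizes nonlinearity in the predicted scale.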