
Teaching Large Language Models to Regress Accurate Image Quality Scores using Score Distribution

About

With the rapid advancement of Multi-modal Large Language Models (MLLMs), MLLM-based Image Quality Assessment (IQA) methods have shown promising performance in linguistic quality description. However, current methods still fall short in accurately scoring image quality. In this work, we aim to leverage MLLMs to regress accurate quality scores. A key challenge is that the quality score is inherently continuous, typically modeled as a Gaussian distribution, whereas MLLMs generate discrete token outputs. This mismatch necessitates score discretization. Previous approaches discretize the mean score into a one-hot label, resulting in information loss and failing to capture inter-image relationships. We propose a distribution-based approach that discretizes the score distribution into a soft label. This method preserves the characteristics of the score distribution, achieving high accuracy and maintaining inter-image relationships. Moreover, to address dataset variation, where different IQA datasets exhibit different distributions, we introduce a fidelity loss based on Thurstone's model. This loss captures intra-dataset relationships, facilitating co-training across multiple IQA datasets. With these designs, we develop the distribution-based Depicted image Quality Assessment model for Score regression (DeQA-Score). Experiments across multiple benchmarks show that DeQA-Score stably outperforms baselines in score regression. Furthermore, DeQA-Score can predict score distributions that closely align with human annotations. Code and model weights have been released at https://depictqa.github.io/deqa-score/.
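The two key ideas in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' released implementation: the five-point level set, the bin edges, and the `eps` constant are assumptions for the sketch. The first function discretizes a Gaussian score distribution N(mu, sigma^2) into a soft label over discrete quality levels; the last two show a Thurstone-style pairwise preference probability and the fidelity loss between two such probabilities.

```python
import math

def gaussian_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2) evaluated at x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def soft_label(mu, sigma, levels=(1, 2, 3, 4, 5)):
    """Discretize a Gaussian score distribution into a soft label:
    integrate the density over each level's bin, then renormalize.
    Bin edges are placed halfway between adjacent levels (an assumption)."""
    edges = [-math.inf] + [(a + b) / 2 for a, b in zip(levels, levels[1:])] + [math.inf]
    probs = [gaussian_cdf(hi, mu, sigma) - gaussian_cdf(lo, mu, sigma)
             for lo, hi in zip(edges, edges[1:])]
    total = sum(probs)
    return [p / total for p in probs]

def thurstone_pref(mu_i, sigma_i, mu_j, sigma_j):
    """Thurstone-model probability that image i scores higher than image j,
    given Gaussian score distributions for both images."""
    s = math.sqrt(sigma_i ** 2 + sigma_j ** 2)
    return gaussian_cdf(mu_i - mu_j, 0.0, s)

def fidelity_loss(p, q, eps=1e-8):
    """Fidelity loss between a ground-truth preference probability p
    and a predicted probability q; zero (up to eps) when p == q."""
    return 1.0 - math.sqrt(p * q + eps) - math.sqrt((1 - p) * (1 - q) + eps)
```

For example, `soft_label(3.0, 0.5)` concentrates most mass on the middle level while keeping probability on the neighbors, which is exactly the information a one-hot label on the rounded mean would discard.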

Zhiyuan You, Xin Cai, Jinjin Gu, Tianfan Xue, Chao Dong · 2025

Related benchmarks

Task                              Dataset       Metric  Result  Rank
Image Quality Assessment          SPAQ          SRCC    0.934   191
Image Quality Assessment          CSIQ          SRCC    0.857   138
Image Quality Assessment          AGIQA-3K      SRCC    0.745   112
Image Quality Assessment          CSIQ (test)   SRCC    0.744   103
Image Quality Assessment          KonIQ-10k     SRCC    0.953   96
Image Quality Assessment          PIPAL         SRCC    0.690   95
Image Quality Assessment          KADID         SRCC    0.687   95
Blind Image Quality Assessment    FLIVE         SRCC    0.501   86
Image Quality Assessment          KonIQ         SRCC    0.946   82
Image Quality Assessment          SPAQ (test)   SRCC    0.896   77

Showing 10 of 20 rows.
