
Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels

About

The explosion of visual content available online underscores the need for accurate machine assessors that can robustly score diverse types of visual content. While recent studies have demonstrated the exceptional potential of large multi-modality models (LMMs) across a wide range of related fields, in this work we explore how to teach them to perform visual rating aligned with human opinions. Observing that human raters learn and judge only discrete text-defined levels in subjective studies, we propose to emulate this subjective process and teach LMMs with text-defined rating levels instead of scores. The proposed Q-Align achieves state-of-the-art performance on image quality assessment (IQA), image aesthetic assessment (IAA), and video quality assessment (VQA) tasks under the original LMM structure. With this syllabus, we further unify the three tasks into one model, termed OneAlign. In our experiments, we demonstrate the advantage of the discrete-level-based syllabus over direct-score-based variants for LMMs. Our code and pre-trained weights are released at https://github.com/Q-Future/Q-Align.
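At inference time, a rating expressed as discrete level tokens must be converted back into a continuous score. A minimal sketch of one way to do this, assuming five ordered levels and a softmax-weighted average over the model's level-token logits; the function name, level set, and score range here are illustrative, not the released Q-Align API:

```python
import numpy as np

# Five ordered text-defined rating levels, worst to best (illustrative).
LEVELS = ["bad", "poor", "fair", "good", "excellent"]

def levels_to_score(level_logits, score_range=(1.0, 5.0)):
    """Turn logits over the five level tokens into a continuous score.

    level_logits: sequence of 5 floats, one logit per entry of LEVELS.
    Returns a softmax-weighted average of equally spaced score anchors.
    """
    logits = np.asarray(level_logits, dtype=float)
    # Numerically stable softmax over the level tokens.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    lo, hi = score_range
    # Map level indices 0..4 onto equally spaced scores in [lo, hi].
    anchors = np.linspace(lo, hi, num=len(LEVELS))
    return float(np.dot(probs, anchors))
```

With uniform logits this returns the midpoint of the score range; a logit strongly favoring "excellent" pushes the score toward the top anchor.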

Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, Qiong Yan, Xiongkuo Min, Guangtao Zhai, Weisi Lin • 2023

Related benchmarks

Task                       Dataset            Metric   Result   Rank
Image Quality Assessment   SPAQ               SRCC     0.887    191
Image Quality Assessment   CSIQ               SRCC     0.7419   138
Video Quality Assessment   KoNViD-1k          SRCC     0.895    134
Image Quality Assessment   AGIQA-3K           SRCC     0.852    112
Image Quality Assessment   CSIQ (test)        SRCC     0.737    103
Image Quality Assessment   KonIQ-10k          SRCC     0.941    96
Image Quality Assessment   LIVE               SRCC     0.8984   96
Image Quality Assessment   KADID              SRCC     68.4     95
Image Quality Assessment   PIPAL              SRCC     41.9     95
Image Quality Assessment   KonIQ-10k (test)   SRCC     0.94     91

Showing 10 of 61 rows
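The SRCC values in the table are Spearman rank-order correlation coefficients between model predictions and human mean opinion scores (MOS): the Pearson correlation of the two rank vectors. A small self-contained sketch with hypothetical numbers (the scores are made up for illustration, and ties are assumed absent):

```python
import numpy as np

def srcc(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no ties, which suffices for this illustration."""
    rx = np.argsort(np.argsort(x))  # rank of each element of x
    ry = np.argsort(np.argsort(y))  # rank of each element of y
    return float(np.corrcoef(rx, ry)[0, 1])

pred = [3.1, 4.2, 2.0, 4.8, 3.5]   # hypothetical model scores
mos  = [3.0, 4.1, 1.8, 4.9, 4.3]   # hypothetical human MOS
print(srcc(pred, mos))  # prints 0.9: one swapped rank pair out of five
```

Because SRCC depends only on rank order, it rewards predictors that order stimuli consistently with human raters even if the absolute score scales differ.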
