VIEScore: Towards Explainable Metrics for Conditional Image Synthesis Evaluation

About

In the rapidly advancing field of conditional image generation, effectively evaluating the performance and capabilities of different models remains difficult, in part because existing metrics offer limited explainability. This paper introduces VIEScore, a Visual Instruction-guided Explainable metric for evaluating any conditional image generation task. VIEScore leverages the general knowledge of Multimodal Large Language Models (MLLMs) as its backbone and requires no training or fine-tuning. We evaluate VIEScore on seven prominent conditional image generation tasks and find that: (1) VIEScore (GPT-4o) achieves a high Spearman correlation of 0.4 with human evaluations, while the human-to-human correlation is 0.45; (2) VIEScore with open-source MLLMs is significantly weaker than GPT-4o and GPT-4v at evaluating synthetic images; (3) VIEScore achieves a correlation on par with human ratings on generation tasks but struggles on editing tasks. Given these results, we believe VIEScore shows great potential to replace human judges in evaluating image synthesis tasks.

Max Ku, Dongfu Jiang, Cong Wei, Xiang Yue, Wenhu Chen • 2023
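
The sketch below illustrates the general idea described in the abstract: an MLLM is prompted to rate a synthesized image against its condition, and the metric's agreement with human judges is then measured by Spearman correlation. This is a minimal illustration, not the authors' reference implementation; the `query_mllm` helper, the 0-10 rating scale, and the normalization are assumptions made for the example.

```python
# Minimal sketch of a VIEScore-style evaluation loop (illustrative only).
# query_mllm is a hypothetical wrapper; replace it with a real MLLM API call.

from scipy.stats import spearmanr


def query_mllm(image_path: str, condition: str) -> float:
    """Hypothetical call to a multimodal LLM (e.g. GPT-4o) that returns a
    0-10 rating of how well the image satisfies the given condition
    (text prompt, edit instruction, subject reference, etc.)."""
    raise NotImplementedError


def viescore_like_rating(image_path: str, condition: str) -> float:
    # Ask the MLLM to rate the synthesized image and normalize to [0, 1].
    return query_mllm(image_path, condition) / 10.0


def correlation_with_humans(metric_scores, human_scores) -> float:
    # Agreement with human judges is reported as Spearman's rho;
    # the abstract reports ~0.4 for VIEScore (GPT-4o) vs ~0.45 human-to-human.
    rho, _ = spearmanr(metric_scores, human_scores)
    return rho
```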

Related benchmarks

Task | Dataset | Result | Rank
Element-level text-to-image alignment evaluation | RichHF | SRCC 65.8 | 17
Element-level text-to-image alignment evaluation | MHaluBench | SRCC 67.8 | 17
Element-level text-to-image alignment evaluation | EvalMuse-40K | SRCC 65.3 | 17
Element-level text-to-image alignment evaluation | GenAI-Bench | SRCC 0.692 | 17
Conditional Image Generation, Editing, and Composition | ImagenHub (test) | Subject-driven IG Score 0.4806 | 9
