
Human-Aligned MLLM Judges for Fine-Grained Image Editing Evaluation: A Benchmark, Framework, and Analysis

About

Evaluating image editing models remains challenging due to the coarse granularity and limited interpretability of traditional metrics, which often fail to capture aspects important to human perception and intent. Such metrics frequently reward visually plausible outputs while overlooking controllability, edit localization, and faithfulness to user instructions. In this work, we introduce a fine-grained Multimodal Large Language Model (MLLM)-as-a-Judge framework for image editing that decomposes common evaluation notions into twelve fine-grained interpretable factors spanning image preservation, edit quality, and instruction fidelity. Building on this formulation, we present a new human-validated benchmark that integrates human judgments, MLLM-based evaluations, model outputs, and traditional metrics across diverse image editing tasks. Through extensive human studies, we show that the proposed MLLM judges align closely with human evaluations at a fine granularity, supporting their use as reliable and scalable evaluators. We further demonstrate that traditional image editing metrics are often poor proxies for these factors, failing to distinguish over-edited or semantically imprecise outputs, whereas our judges provide more intuitive and informative assessments in both offline and online settings. Together, this work introduces a benchmark, a principled factorization, and empirical evidence positioning fine-grained MLLM judges as a practical foundation for studying, comparing, and improving image editing approaches.
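The page does not reproduce the paper's prompts or factor definitions, but the MLLM-as-a-Judge pattern it describes is easy to sketch. The snippet below asks a vision-capable chat model to score one edit on a few illustrative factors and return JSON; the factor names, prompt wording, and model choice are assumptions for illustration, not the authors' implementation.

```python
import base64
import json

from openai import OpenAI  # illustrative choice; any MLLM with image input works

client = OpenAI()

# Illustrative subset of fine-grained factors; the paper defines twelve
# spanning image preservation, edit quality, and instruction fidelity.
FACTORS = ["unchanged_region_preservation", "edit_localization", "instruction_fidelity"]


def _data_url(path: str) -> str:
    """Encode a local image as a base64 data URL for the chat API."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()


def judge_edit(source_path: str, edited_path: str, instruction: str) -> dict:
    """Ask an MLLM judge to rate one edit on each factor (1-10); returns a dict."""
    prompt = (
        "You are an image-editing judge. The first image is the source and the "
        f"second is the edited result for the instruction: '{instruction}'.\n"
        "Score each factor from 1 (worst) to 10 (best) and reply with JSON only, "
        f"mapping factor name to score. Factors: {', '.join(FACTORS)}."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: stand-in for whatever judge model is used
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": _data_url(source_path)}},
                {"type": "image_url", "image_url": {"url": _data_url(edited_path)}},
            ],
        }],
    )
    return json.loads(resp.choices[0].message.content)
```

Returning one score per factor, rather than a single scalar, is what makes the judgments interpretable: failures such as over-editing or imprecise localization surface in a specific factor instead of being averaged away.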

Runzhou Liu, Hailey Weingord, Sejal Mittal, Prakhar Dungarwal, Anusha Nandula, Bo Ni, Samyadeep Basu, Hongjie Chen, Nesreen K. Ahmed, Li Li, Jiayi Zhang, Koustava Goswami, Subhojyoti Mukherjee, Branislav Kveton, Puneet Mathur, Franck Dernoncourt, Yue Zhao, Yu Wang, Ryan A. Rossi, Zhengzhong Tu, Hongru Du ((1) University of Virginia, (2) Columbia University, (3) Vanderbilt University, (4) Adobe Research, (5) Dolby Laboratories, (6) Cisco Research, (7) University of Southern California, (8) University of Wisconsin-Madison, (9) University of Oregon, (10) Texas A&M University) • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Instruction-guided image editing | HumanEdit Relation | Overall Average Score | 6.153 | 3
Image Editing | Image Editing Dataset (Add) | Unchanged Regions Score | 6.444 | 2
Image Editing | Image Editing Dataset (Remove) | Unchanged Regions | 5.824 | 2
Image Editing | Image Editing Dataset (Replace) | Unchanged Regions Preservation | 6.222 | 2
Image Editing | Image Editing Dataset (Action) | Unchanged Regions | 5.826 | 2
Image Editing | Image Editing Dataset (Counting) | Unchanged Regions | 4.4 | 2
Image Editing | Image Editing Dataset (Relation) | Unchanged Regions Score | 5.833 | 2
Image Editing | Image Editing Dataset (All Edits) | Unchanged Regions | 5.758 | 2
Instruction-guided image editing | HumanEdit Add | Image Preservation | 6.555 | 2
Instruction-guided image editing | HumanEdit Remove | Image Preservation Score | 5.971 | 2
Showing 10 of 17 rows
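The abstract's alignment claim — that judge scores track human ratings at a fine granularity — is typically quantified per factor with a rank correlation. Below is a small, self-contained sketch using SciPy's Spearman correlation on made-up paired ratings; the numbers and factor are hypothetical, for illustration only.

```python
from scipy.stats import spearmanr

# Hypothetical paired ratings for one factor ("unchanged regions") over six
# edits: one score per edit from human annotators and from the MLLM judge.
human_scores = [6.0, 4.5, 8.0, 5.5, 7.0, 3.0]
judge_scores = [6.4, 4.4, 7.8, 5.8, 6.9, 3.5]

rho, p_value = spearmanr(human_scores, judge_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")
# A rho near 1 means the judge ranks edits in nearly the same order as humans;
# repeating this check for each of the twelve factors is how fine-grained
# human alignment would be assessed.
```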
