Multi-modal, Multi-task, Multi-criteria Automatic Evaluation with Vision Language Models
About
Vision-language models (VLMs) have shown impressive abilities across a range of multi-modal tasks. However, existing metrics for evaluating the quality of text generated by VLMs typically focus on an overall evaluation for a specific task, such as image captioning. While the overall evaluation is essential for any task, the criteria prioritized can differ depending on the task, making it challenging for current metrics to adapt to multi-task scenarios. To address this limitation, we propose HarmonicEval, a reference-free comprehensive evaluation metric that aggregates criterion-wise scores to produce the overall score in a bottom-up manner. Furthermore, to assess the generalizability of automatic evaluation metrics in multi-task scenarios, we construct the Multi-task Multi-criteria Human Evaluation (MMHE) benchmark, which comprises 18,000 expert human judgments across four multi-modal tasks. Our experiments demonstrate that HarmonicEval achieves higher correlations with human judgments than conventional metrics while providing numerical scores for each criterion. Project page: https://stjohn2007.github.io/MMHE_project/
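The bottom-up aggregation described above can be sketched in a few lines. This is a minimal illustration only: the criterion names, the 1–5 score range, and the choice of a weighted harmonic mean (suggested by the metric's name) are assumptions for the sketch, not the paper's exact formulation.

```python
def harmonic_aggregate(criterion_scores, weights=None):
    """Combine per-criterion scores into a single overall score.

    Hypothetical sketch: a weighted harmonic mean, which penalizes a
    low score on any single criterion more strongly than an arithmetic
    mean would. `criterion_scores` maps criterion name -> score > 0.
    """
    if weights is None:
        weights = {c: 1.0 for c in criterion_scores}
    total_weight = sum(weights.values())
    denominator = sum(weights[c] / s for c, s in criterion_scores.items())
    return total_weight / denominator

# Hypothetical criteria and scores on a 1-5 scale.
scores = {"fluency": 5, "relevance": 4, "descriptiveness": 2}
overall = harmonic_aggregate(scores)
```

Because the harmonic mean is dominated by its smallest term, one weak criterion pulls the overall score down noticeably, while the per-criterion scores remain available for inspection.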
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Captioning Evaluation | Composite | Kendall Tau-c | 66.2 | 131 |
| Image Captioning Evaluation | Flickr8K-CF | Kendall Tau-b | 39.2 | 99 |
| Image Captioning Evaluation | Pascal-50S | Accuracy | 82.4 | 44 |
| Image Captioning | Flickr8k-EX | Kendall Tau-c | 0.531 | 22 |
| Hallucination Evaluation | MMHE | REG | 66.6 | 11 |
| Image Captioning Evaluation | FOIL | Accuracy | 97.8 | 10 |
| Image Captioning | MMHE User Study | Human Preference Count | 19 | 2 |
| Referring Expression Generation | MMHE User Study | Human Preference Count | 19 | 2 |
| Visual Document Understanding | MMHE User Study | Human Preference Count | 21 | 2 |
| Visual Question Answering | MMHE User Study | Human Preference Count | 12 | 2 |
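Several of the leaderboard entries above report Kendall rank correlation between metric scores and human judgments. A minimal pure-Python sketch of Kendall's Tau-b (the tie-adjusted variant reported for Flickr8K-CF) is shown below; the toy data are illustrative only.

```python
import math

def kendall_tau_b(x, y):
    """Kendall's Tau-b: rank correlation adjusted for ties.

    Counts concordant (P) and discordant (Q) pairs, plus pairs tied
    only in x (Tx) or only in y (Ty); pairs tied in both are ignored.
    """
    assert len(x) == len(y)
    P = Q = Tx = Ty = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0 and dy == 0:
                continue          # tied in both variables
            elif dx == 0:
                Tx += 1
            elif dy == 0:
                Ty += 1
            elif dx * dy > 0:
                P += 1            # concordant pair
            else:
                Q += 1            # discordant pair
    return (P - Q) / math.sqrt((P + Q + Tx) * (P + Q + Ty))

# Toy example: hypothetical metric scores vs. human ratings for five captions.
metric = [0.9, 0.7, 0.4, 0.4, 0.1]
human = [5, 4, 3, 2, 1]
tau = kendall_tau_b(metric, human)
```

Tau-c (reported for Composite and Flickr8k-EX) differs only in its normalizing denominator, which better handles rating scales with few distinct levels; in practice both are typically computed with `scipy.stats.kendalltau` and its `variant` argument rather than by hand.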