
Vision Language Model-based Caption Evaluation Method Leveraging Visual Context Extraction

About

Given the accelerating progress of vision and language modeling, accurate evaluation of machine-generated image captions remains critical. To evaluate captions in closer accord with human preferences, metrics need to discriminate between captions of varying quality and content. However, conventional metrics fall short, comparing little beyond superficial word matches or embedding similarities; thus, they still need improvement. This paper presents VisCE$^2$, a vision language model-based caption evaluation method. Our method focuses on visual context, which refers to the detailed content of images, including objects, attributes, and relationships. By extracting these and organizing them into a structured format, we replace human-written references with visual contexts and help VLMs better understand the image, enhancing evaluation performance. Through meta-evaluation on multiple datasets, we validated that VisCE$^2$ outperforms conventional pre-trained metrics in capturing caption quality and demonstrates superior consistency with human judgment.
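The two-stage recipe described above — first extract structured visual context from the image, then score the candidate caption against that context instead of human-written references — can be sketched roughly as follows. The prompt wording, the `query_vlm` callable, and the 1–5 score scale are illustrative assumptions, not the paper's actual prompts or released code.

```python
# Hypothetical sketch of a VisCE^2-style two-stage evaluation.
# `query_vlm` stands in for any vision-language model call and is
# an assumption, not part of the paper's implementation.

CONTEXT_PROMPT = (
    "List the visual context of this image as a structured outline: "
    "objects, their attributes, and the relationships between them."
)

def build_eval_prompt(caption: str, visual_context: str) -> str:
    """Combine the extracted visual context with the candidate caption
    into a single scoring prompt (no reference captions are used)."""
    return (
        "Visual context:\n"
        f"{visual_context}\n\n"
        f"Candidate caption: {caption}\n"
        "On a scale of 1-5, how accurately and completely does the "
        "caption describe the image? Answer with a single number."
    )

def evaluate_caption(image, caption: str, query_vlm) -> int:
    # Stage 1: extract structured visual context from the image.
    visual_context = query_vlm(image, CONTEXT_PROMPT)
    # Stage 2: score the caption conditioned on that context.
    answer = query_vlm(image, build_eval_prompt(caption, visual_context))
    return int(answer.strip())
```

With any VLM wrapped as `query_vlm(image, prompt) -> str`, the pipeline returns the parsed integer score; the key design point from the abstract is that the reference text is replaced by model-extracted visual context.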

Koki Maeda, Shuhei Kurita, Taiki Miyanishi, Naoaki Okazaki • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Captioning Evaluation | Composite | Kendall tau_c | 57.7 | 131 |
| Image Captioning Evaluation | Flickr8K-CF | Kendall tau_b | 35.4 | 99 |
| Image Captioning Evaluation | Pascal-50S | Accuracy | 81.4 | 44 |
| Image Captioning | Flickr8k-EX | Kendall tau_c | 0.526 | 22 |
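The Kendall correlations reported above measure rank agreement between a metric's scores and human judgments. A minimal pure-Python sketch of tau_b (tie-corrected) and tau_c (Stuart's variant), using the standard textbook definitions rather than the paper's evaluation code:

```python
import math

def kendall_tau(x, y):
    """Return (tau_b, tau_c) for two equal-length score lists.
    tau_b corrects for ties in either list; tau_c (Stuart's) suits
    data where the two lists have different numbers of distinct
    levels. Standard definitions, not the paper's code."""
    n = len(x)
    concordant = discordant = ties_x = ties_y = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0 and dy == 0:
                continue            # tied in both: ignored by tau_b
            elif dx == 0:
                ties_x += 1
            elif dy == 0:
                ties_y += 1
            elif dx * dy > 0:
                concordant += 1     # pair ranked the same way
            else:
                discordant += 1     # pair ranked oppositely
    diff = concordant - discordant
    tau_b = diff / math.sqrt(
        (concordant + discordant + ties_x)
        * (concordant + discordant + ties_y)
    )
    m = min(len(set(x)), len(set(y)))          # distinct score levels
    tau_c = 2.0 * m * diff / (n * n * (m - 1))
    return tau_b, tau_c
```

For large human-judgment datasets an O(n log n) implementation (e.g. `scipy.stats.kendalltau`) is preferable to this O(n²) loop; the sketch only shows what the reported numbers mean.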
