
CLAIR: Evaluating Image Captions with Large Language Models

About

The evaluation of machine-generated image captions poses an interesting yet persistent challenge. Effective evaluation measures must consider numerous dimensions of similarity, including semantic relevance, visual structure, object interactions, caption diversity, and specificity. Existing highly-engineered measures attempt to capture specific aspects, but fall short in providing a holistic score that aligns closely with human judgments. Here, we propose CLAIR, a novel method that leverages the zero-shot language modeling capabilities of large language models (LLMs) to evaluate candidate captions. In our evaluations, CLAIR demonstrates a stronger correlation with human judgments of caption quality compared to existing measures. Notably, on Flickr8K-Expert, CLAIR achieves relative correlation improvements over SPICE of 39.6% and over image-augmented methods such as RefCLIP-S of 18.3%. Moreover, CLAIR provides noisily interpretable results by allowing the language model to identify the underlying reasoning behind its assigned score. Code is available at https://davidmchan.github.io/clair/

David Chan, Suzanne Petryk, Joseph E. Gonzalez, Trevor Darrell, John Canny • 2023
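As a rough illustration of the approach described in the abstract, the sketch below asks an LLM to score a candidate caption against a set of reference captions and to explain its score. The prompt wording, the model name, and the helper clair_style_score are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Minimal sketch of an LLM-based caption scorer in the spirit of CLAIR.
# Prompt text, model choice, and JSON parsing are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """You are evaluating an image caption against reference captions.

Candidate caption:
{candidate}

Reference captions:
{references}

On a scale of 0 to 100, how likely is it that the candidate caption describes
the same image as the reference captions? Respond with JSON of the form
{{"score": <int>, "reason": "<one-sentence explanation>"}}."""

def clair_style_score(candidate: str, references: list[str],
                      model: str = "gpt-3.5-turbo") -> tuple[float, str]:
    """Ask an LLM for a 0-100 similarity score and a short rationale."""
    prompt = PROMPT.format(
        candidate=candidate,
        references="\n".join(f"- {r}" for r in references),
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    parsed = json.loads(response.choices[0].message.content)
    return parsed["score"] / 100.0, parsed["reason"]

# Example usage:
# score, reason = clair_style_score(
#     "a dog catches a frisbee in a park",
#     ["a brown dog leaps to catch a frisbee on the grass"])
# print(score, reason)
```

The returned score can then be compared against human quality judgments, which is how the correlations reported in the benchmark table below are computed.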

Related benchmarks

Task                             Dataset                  Metric                          Result   Rank
Image Captioning Evaluation      Composite                Kendall-c Tau_c                 61       92
Image Captioning Evaluation      Flickr8K Expert (test)   Kendall tau_c                   48.3     76
Image Captioning Evaluation      Flickr8k Expert          Kendall Tau-c (tau_c)           48.8     73
Image Captioning Evaluation      Pascal-50S (test)        HC                              52.4     66
Image Captioning Evaluation      Flickr8K-CF              Kendall-b Correlation (tau_b)   38.2     62
Image Captioning Evaluation      Pascal-50S               Mean Score                      78.7     39
Hallucination Detection          FOIL                     Accuracy (4 Refs)               93.6     32
Image Captioning Evaluation      COMPOSITE (COM) (test)   Kendall's tau-c                 61       17
Object Hallucination Detection   FOIL (test)              Accuracy                        93.6     9
