
SPICE: Semantic Propositional Image Caption Evaluation

About

There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs, coined SPICE. Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as "which caption-generator best understands colors?" and "can caption-generators count?"
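
SPICE scores a candidate caption as an F-score over the semantic propositional tuples (objects, attributes, and relations) of its scene graph, matched against the tuples from the reference captions. Below is a minimal Python sketch of that F-score, assuming the captions have already been parsed into tuples; the actual metric builds scene graphs with a dependency-parse-based parser and matches tuples under WordNet synonyms rather than exact equality, so the helper and example data here are illustrative only.

    # Minimal sketch of the SPICE F-score over scene-graph tuples.
    # Tuples such as ("girl",), ("girl", "young") or
    # ("girl", "standing-on", "field") are assumed to be given already;
    # real SPICE derives them from a parsed scene graph and matches them
    # with WordNet synonym sets, not exact string equality.

    def spice_f1(candidate_tuples: set, reference_tuples: set) -> float:
        """F1 between candidate tuples and the merged reference tuples."""
        if not candidate_tuples or not reference_tuples:
            return 0.0
        matched = len(candidate_tuples & reference_tuples)
        precision = matched / len(candidate_tuples)
        recall = matched / len(reference_tuples)
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    # Example: the candidate recovers two of the three reference tuples.
    cand = {("girl",), ("girl", "young"), ("field",)}
    refs = {("girl",), ("girl", "standing-on", "field"), ("field",)}
    print(round(spice_f1(cand, refs), 3))  # 0.667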

Peter Anderson, Basura Fernando, Mark Johnson, Stephen Gould • 2016

Related benchmarks

Task                                     | Dataset                | Metric            | Result | Rank
-----------------------------------------|------------------------|-------------------|--------|-----
Image Captioning Evaluation              | Composite              | Kendall tau_c     | 40.3   | 131
Image Captioning Evaluation              | Flickr8K-CF            | Kendall tau_b     | 51.7   | 99
Image Captioning Evaluation              | Flickr8k Expert        | Kendall tau_c     | 44.9   | 82
Image Captioning Evaluation              | Flickr8K Expert (test) | Kendall tau_c     | 44.9   | 76
Image Captioning Evaluation              | Pascal-50S (test)      | Accuracy (HC)     | 63.6   | 66
Image Captioning Evaluation              | Flickr8K-CF (test)     | Kendall tau_b     | 24.4   | 65
Image Captioning Evaluation              | Pascal-50S             | Accuracy          | 78.7   | 44
Hallucination Detection                  | FOIL                   | Accuracy (4 refs) | 86.1   | 32
Image Captioning Evaluation              | Nebula                 | Kendall tau_c     | 47.4   | 31
Image Captioning Hallucination Detection | FOIL (test)            | Accuracy          | 86.1   | 28
(Showing 10 of 29 rows.)
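
Most rows above report Kendall rank correlations (tau_b or tau_c) between a metric's per-caption scores and human judgments. As a rough illustration of how such a correlation is computed, here is a short sketch using SciPy's kendalltau (the variant parameter requires SciPy 1.7+); all score values below are made up for illustration and are not taken from any benchmark above.

    # Hedged sketch: Kendall rank correlation between a caption metric's
    # scores and human judgments. The numbers are illustrative only.
    from scipy.stats import kendalltau

    metric_scores = [0.31, 0.74, 0.52, 0.88, 0.15]  # e.g., per-caption SPICE
    human_scores = [1.0, 3.5, 2.0, 4.0, 1.5]        # e.g., expert ratings

    tau_b, p_b = kendalltau(metric_scores, human_scores, variant="b")
    tau_c, p_c = kendalltau(metric_scores, human_scores, variant="c")
    print(f"tau_b = {tau_b:.3f} (p = {p_b:.3f})")
    print(f"tau_c = {tau_c:.3f} (p = {p_c:.3f})")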
