# SPICE: Semantic Propositional Image Caption Evaluation

## About
There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs coined SPICE. Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as "which caption-generator best understands colors?" and "can caption-generators count?"
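Concretely, SPICE parses each caption into a scene graph of objects, attributes, and relations, then scores the candidate caption with an F-score over the resulting semantic proposition tuples. Below is a minimal sketch of that F-score, assuming the tuples have already been extracted; the full metric builds them with a dependency parser and also accepts WordNet synonym matches, both omitted here:

```python
def spice_f1(candidate_tuples: set, reference_tuples: set) -> float:
    """F-score over semantic proposition tuples, following the SPICE definition."""
    if not candidate_tuples or not reference_tuples:
        return 0.0
    # Exact-match intersection; the real metric also counts WordNet synonym matches.
    matched = candidate_tuples & reference_tuples
    precision = len(matched) / len(candidate_tuples)
    recall = len(matched) / len(reference_tuples)
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy tuples, as if extracted from the scene graphs of a candidate caption
# and the pooled reference captions: objects, (object, attribute) pairs,
# and (object, relation, object) triples.
candidate = {("girl",), ("table",), ("girl", "young"), ("girl", "sit-at", "table")}
references = {("girl",), ("table",), ("girl", "little"),
              ("girl", "sit-at", "table"), ("table", "wooden")}
print(f"SPICE (exact-match sketch): {spice_f1(candidate, references):.3f}")  # 0.667
```

Because matching happens at the tuple level, the per-tuple breakdown can be restricted to a single category (e.g., color attributes or counts), which is what enables the questions quoted above.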
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Captioning Evaluation | Composite | Kendall Tau-c (tau_c) | 40.3 | 92 |
| Image Captioning Evaluation | Flickr8K Expert (test) | Kendall Tau-c (tau_c) | 44.9 | 76 |
| Image Captioning Evaluation | Flickr8k Expert | Kendall Tau-c (tau_c) | 44.9 | 73 |
| Image Captioning Evaluation | Pascal-50S (test) | HC | 63.6 | 66 |
| Image Captioning Evaluation | Flickr8K-CF (test) | Kendall Tau-b (tau_b) | 24.4 | 65 |
| Image Captioning Evaluation | Flickr8K-CF | Kendall Tau-b (tau_b) | 24.4 | 62 |
| Image Captioning Evaluation | Pascal-50S | Mean Score | 78.7 | 39 |
| Hallucination Detection | FOIL | Accuracy (4 Refs) | 86.1 | 32 |
| Image Captioning Hallucination Detection | FOIL (test) | Accuracy | 86.1 | 28 |
| Correlation with human judgment | Flickr8K-CF | Kendall Tau-b (tau_b) | 24.4 | 26 |
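The Kendall tau_b and tau_c entries above are rank correlations between a metric's caption scores and human judgments of the same captions. A minimal sketch of how such a correlation is computed, assuming SciPy >= 1.7 for the `variant` argument; the score and rating values are hypothetical:

```python
from scipy.stats import kendalltau

# Hypothetical metric scores and human ratings for five candidate captions.
metric_scores = [0.62, 0.48, 0.71, 0.30, 0.55]
human_ratings = [4, 3, 5, 1, 3]

# tau_b corrects for ties, as used for the Flickr8K-CF rows above.
tau_b, p_value = kendalltau(metric_scores, human_ratings, variant="b")
print(f"Kendall tau_b = {tau_b:.3f} (p = {p_value:.3f})")

# tau_c (Stuart's tau-c) suits scales with different numbers of levels,
# as in the Flickr8K Expert rows above.
tau_c, _ = kendalltau(metric_scores, human_ratings, variant="c")
print(f"Kendall tau_c = {tau_c:.3f}")
```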