
Object Hallucination in Image Captioning

About

Despite continuously improving performance, contemporary image captioning models are prone to "hallucinating" objects that are not actually in a scene. One problem is that standard metrics only measure similarity to ground truth captions and may not fully capture image relevance. In this work, we propose a new image relevance metric to evaluate current models with veridical visual labels and assess their rate of object hallucination. We analyze how captioning model architectures and learning objectives contribute to object hallucination, explore when hallucination is likely due to image misclassification or language priors, and assess how well current sentence metrics capture object hallucination. We investigate these questions on the standard image captioning benchmark, MSCOCO, using a diverse set of models. Our analysis yields several interesting findings, including that models which score best on standard sentence metrics do not always have lower hallucination and that models which hallucinate more tend to make errors driven by language priors.
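The proposed image relevance metric compares the objects mentioned in a generated caption against veridical visual labels for the image. A minimal sketch of such a per-instance hallucination rate is below; the function name and structure are illustrative assumptions, not the authors' released implementation:

```python
def hallucination_rate(caption_objects, image_objects):
    """Fraction of object mentions in a caption that are not in the image.

    caption_objects: list of object words extracted from a generated caption
                     (duplicates count as separate mentions).
    image_objects:   set of ground-truth object labels present in the image.
    """
    if not caption_objects:
        return 0.0  # a caption mentioning no objects hallucinates nothing
    hallucinated = [obj for obj in caption_objects if obj not in image_objects]
    return len(hallucinated) / len(caption_objects)


# Example: the caption mentions "car", which is absent from the image labels.
rate = hallucination_rate(["dog", "frisbee", "car"], {"dog", "frisbee", "person"})
```

Averaging this quantity over a benchmark such as MSCOCO gives a model-level hallucination score that can be compared against standard sentence metrics.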

Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, Kate Saenko • 2018

Related benchmarks

Task                                | Dataset                      | Result               | Rank
Token-level hallucination detection | HalLoc Instruct              | Object Precision: 15 | 7
Token-level hallucination detection | HalLoc Caption               | Object Precision: 3  | 7
Token-level hallucination detection | HalLoc VQA                   | Object Precision: 27 | 7
Foil Detection                      | FOIL-nocaps (Overall)        | FDR: 58.3            | 6
Foil Detection                      | FOIL-it (test)               | FDR: 20.2            | 6
Foil Detection                      | FOIL-nocaps (In Domain)      | FDR: 57.8            | 6
Foil Detection                      | FOIL-nocaps (Near Domain)    | FDR: 59.1            | 6
Foil Detection                      | FOIL-nocaps (Out of Domain)  | FDR: 58.1            | 6
