
ALOHa: A New Measure for Hallucination in Captioning Models

About

Despite recent advances in multimodal pre-training for visual description, state-of-the-art models still produce captions containing errors, such as hallucinating objects not present in a scene. The existing prominent metric for object hallucination, CHAIR, is limited to a fixed set of MS COCO objects and synonyms. In this work, we propose a modernized open-vocabulary metric, ALOHa, which leverages large language models (LLMs) to measure object hallucinations. Specifically, we use an LLM to extract groundable objects from a candidate caption, measure their semantic similarity to reference objects from captions and object detections, and use Hungarian matching to produce a final hallucination score. We show that ALOHa correctly identifies 13.6% more hallucinated objects than CHAIR on HAT, a new gold-standard subset of MS COCO Captions annotated for hallucinations, and 30.8% more on nocaps, where objects extend beyond MS COCO categories. Our code is available at https://davidmchan.github.io/aloha/.
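The matching step described above can be sketched with a toy example. This is a hypothetical illustration, not the authors' implementation: it assumes a precomputed similarity matrix between candidate and reference objects (in the paper these come from an LLM-based extraction and a semantic similarity model) and applies Hungarian matching via `scipy.optimize.linear_sum_assignment`.

```python
# Minimal, hypothetical sketch of ALOHa's Hungarian-matching step.
# Assumes sim[i, j] holds the semantic similarity between candidate
# object i and reference object j (values here are made up for the demo).
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_objects(sim: np.ndarray):
    """Return (caption-level score, per-object matched similarities)."""
    n_cand, n_ref = sim.shape
    if n_ref < n_cand:
        # Pad with zero-similarity "null" references so every candidate is
        # assigned; a candidate forced onto a null column has no plausible
        # reference match, i.e. it is likely hallucinated.
        sim = np.hstack([sim, np.zeros((n_cand, n_cand - n_ref))])
    rows, cols = linear_sum_assignment(-sim)  # maximize by negating
    matched = sim[rows, cols]
    # Score the caption by its weakest matched object: a low value means
    # at least one candidate object lacks a semantically similar reference.
    return float(matched.min()), matched

# Toy similarities (illustrative only, not from the paper).
# Candidates: ["dog", "frisbee", "umbrella"]; references: ["dog", "frisbee"].
sim = np.array([
    [0.95, 0.10],
    [0.12, 0.92],
    [0.05, 0.08],  # "umbrella" resembles neither reference object
])
score, matched = match_objects(sim)
# "umbrella" ends up on a null reference, so the caption-level score is 0.0
```

Here "dog" and "frisbee" find strong reference matches, while the hallucinated "umbrella" is assigned to a zero-similarity null reference and drags the caption score to zero.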

Suzanne Petryk, David M. Chan, Anish Kachinthaya, Haodi Zou, John Canny, Joseph E. Gonzalez, Trevor Darrell • 2024

Related benchmarks

Task                                  | Dataset                     | Metric    | Result | Rank
Word-level multi-label classification | Rich-HF (test)              | Precision | 34.4   | 7
Foil Detection                        | FOIL-it (test)              | FDR       | 19.8   | 6
Foil Detection                        | FOIL-nocaps (In Domain)     | FDR       | 71.8   | 6
Foil Detection                        | FOIL-nocaps (Near Domain)   | FDR       | 66.7   | 6
Foil Detection                        | FOIL-nocaps (Out of Domain) | FDR       | 70.9   | 6
Foil Detection                        | FOIL-nocaps (Overall)       | FDR       | 69.5   | 6
