MetaToken: Detecting Hallucination in Image Descriptions by Meta Classification

About

Large Vision Language Models (LVLMs) have shown remarkable capabilities in multimodal tasks such as visual question answering and image captioning. However, inconsistencies between the visual information and the generated text, a phenomenon referred to as hallucination, remain an unsolved problem with regard to the trustworthiness of LVLMs. To address this problem, recent works have proposed incorporating computationally costly Large (Vision) Language Models to detect hallucinations at the sentence or subsentence level. In this work, we introduce MetaToken, a lightweight binary classifier that detects hallucinations at the token level at negligible cost. Based on a statistical analysis, we reveal key factors of hallucination in LVLMs. MetaToken can be applied to any open-source LVLM without any knowledge of ground-truth data, providing calibrated hallucination detection. We evaluate our method on four state-of-the-art LVLMs, demonstrating the effectiveness of our approach.

Laura Fieback, Jakob Spiegelberg, Hanno Gottschalk• 2024
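The core idea above, a lightweight binary meta-classifier over per-token statistics, can be sketched as follows. Note this is an illustrative sketch only: the feature set (token log-probability, output-distribution entropy, token position), the synthetic training data, and the plain logistic-regression learner are assumptions for illustration, not the paper's actual features or training setup.

```python
# Sketch of a MetaToken-style token-level hallucination meta-classifier.
# Features and data below are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def token_features(logprob, entropy, position):
    """Stack per-token statistics into a feature vector (with bias term)."""
    return np.array([logprob, entropy, position, 1.0])

# Synthetic training data: we assume hallucinated tokens tend to have
# lower log-probability and higher entropy than grounded tokens.
n = 2000
labels = rng.integers(0, 2, n)  # 1 = hallucinated
logprobs = np.where(labels == 1, rng.normal(-3.0, 1.0, n), rng.normal(-0.5, 0.5, n))
entropies = np.where(labels == 1, rng.normal(3.0, 0.8, n), rng.normal(1.0, 0.5, n))
positions = rng.uniform(0.0, 1.0, n)
X = np.stack([token_features(l, e, p)
              for l, e, p in zip(logprobs, entropies, positions)])

# Train a logistic-regression meta-classifier by batch gradient descent.
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - labels) / n

def hallucination_prob(logprob, entropy, position):
    """Probability that a generated token is hallucinated."""
    z = token_features(logprob, entropy, position) @ w
    return 1.0 / (1.0 + np.exp(-z))

print(hallucination_prob(-4.0, 3.5, 0.5))  # uncertain token: high score
print(hallucination_prob(-0.2, 0.8, 0.5))  # confident token: low score
```

At inference the classifier needs only per-token statistics from the LVLM's own output distribution, which is why this kind of detector adds negligible cost compared to running a second large model as a judge.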

Related benchmarks

Task                                 Dataset                                        Metric     Result   Rank
Hallucination Evaluation             POPE                                           --         --       153
Hallucination Detection              POPE official (val)                            A-PR       94.84    34
Hallucination Detection              M-HalDetect (val)                              A-ROC      82.02    30
Hallucination Detection              AMBER sampled 5k                               A-ROC      75.59    30
Hallucination Detection              COCO caption (val)                             A-ROC      70.89    30
Token-level hallucination detection  MS COCO image captioning (test)                Precision  70       27
Object Hallucination Detection       AMBER out-of-distribution (OOD)                AUC        0.8215   8
Object Hallucination Detection       MSCOCO LLaVA 1.5 (test)                        AUC        88.95    8
Object Hallucination Detection       MSCOCO Qwen3-VL 3 (test)                       AUC        86.29    8
Object Hallucination Detection       MSCOCO Average performance across VLMs (test)  AUC        83.83    8

(10 of 11 rows shown)
