
CRoPS: A Training-Free Hallucination Mitigation Framework for Vision-Language Models

About

Despite the rapid success of Large Vision-Language Models (LVLMs), a persistent challenge is their tendency to generate hallucinated content, undermining reliability in real-world use. Existing training-free methods address hallucinations but face two limitations: (i) they rely on narrow assumptions about hallucination sources, and (ii) their effectiveness declines toward the end of generation, where hallucinations are most likely to occur. A common strategy is to build hallucinated models by completely or partially removing visual tokens and contrasting them with the original model. Yet, this alone proves insufficient, since visual information still propagates into generated text. Building on this insight, we propose a novel hallucinated model that captures hallucination effects by selectively removing key text tokens. We further introduce Generalized Contrastive Decoding, which integrates multiple hallucinated models to represent diverse hallucination sources. Together, these ideas form CRoPS, a training-free hallucination mitigation framework that improves CHAIR scores by 20% and achieves consistent gains across six benchmarks and three LVLM families, outperforming state-of-the-art training-free methods.
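The core mechanism described above — contrasting the original model's next-token logits against one or more "hallucinated" models (e.g. one with visual tokens removed, one with key text tokens removed) — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the combination formula, the `alpha` weights, and the function name are assumptions modeled on the standard contrastive-decoding form.

```python
import numpy as np

def generalized_contrastive_decoding(base_logits, hallucinated_logits_list, alphas):
    """Hypothetical sketch of combining base-model logits with several
    'hallucinated' models' logits. Uses the common contrastive-decoding form
        adjusted = (1 + sum(alphas)) * base - sum(alpha_i * hallucinated_i),
    which rewards tokens the base model prefers over the hallucinated ones.
    The exact formula in CRoPS may differ."""
    base = np.asarray(base_logits, dtype=float)
    adjusted = (1.0 + sum(alphas)) * base
    for alpha, h in zip(alphas, hallucinated_logits_list):
        adjusted -= alpha * np.asarray(h, dtype=float)
    return adjusted

# Toy usage over a 4-token vocabulary with two hallucinated models:
base = [2.0, 1.0, 0.5, 0.0]
h_visual = [1.5, 1.2, 0.4, 0.1]  # e.g. visual tokens removed (assumed)
h_text = [1.8, 0.9, 0.6, 0.2]    # e.g. key text tokens removed (assumed)
adj = generalized_contrastive_decoding(base, [h_visual, h_text], alphas=[0.5, 0.5])
next_token = int(np.argmax(adj))  # greedy pick from the adjusted logits
```

In practice the adjusted logits would replace the base logits inside the decoding loop at each step, so the penalty applies throughout generation, including late positions where the abstract notes hallucinations are most likely.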

Neeraj Anand, Samyak Jha, Udbhav Bamba, Rahul Rahaman • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Mathematical Reasoning | MathVista | Accuracy 55.6 | 189 |
| Hallucination Assessment | AMBER | CHAIR_s 7.2 | 47 |
| Vision-Language Grounding | MS-COCO 2014 (val) | CS 39.5 | 32 |
| Multimodal Perception | MME | MME Score 2180 | 24 |
| VQA Hallucination Detection | POPE (average of Random, Popular, and Adversarial splits, 2023) | Accuracy 89.4 | 24 |
