
First Logit Boosting: Visual Grounding Method to Mitigate Object Hallucination in Large Vision-Language Models

About

Recent Large Vision-Language Models (LVLMs) have demonstrated remarkable performance across various multimodal tasks that require understanding both visual and linguistic inputs. However, object hallucination -- the generation of nonexistent objects in answers -- remains a persistent challenge. Although several approaches such as retraining and external grounding methods have been proposed to mitigate this issue, they still suffer from high data costs or structural complexity. Training-free methods such as Contrastive Decoding (CD) are more cost-effective, avoiding additional training or external models, but still suffer from long-term decay, where visual grounding weakens and language priors dominate as generation progresses. In this paper, we propose First Logit Boosting (FLB), a simple yet effective training-free technique designed to alleviate long-term decay in LVLMs. FLB stores the logits of the first generated token and adds them to subsequent token predictions, effectively mitigating the long-term decay of visual information. We observe that FLB (1) sustains the visual information embedded in the first token throughout generation, and (2) suppresses hallucinated words through the stabilizing effect of the "The" token. Experimental results show that FLB significantly reduces object hallucination across various tasks, benchmarks, and backbone models. Notably, it incurs negligible inference overhead, making it highly applicable to real-time multimodal systems. Code is available at https://github.com/jiwooha20/FLB.
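The core idea described above -- cache the logits of the first generated token and add them to every later decoding step -- can be sketched in a few lines. This is a minimal illustration based only on the abstract, not the authors' released code: the callable `model_logits` and the toy vocabulary are hypothetical, plain (unscaled) addition is assumed as the combination rule, and greedy decoding is used for simplicity.

```python
import numpy as np

def decode_with_flb(model_logits, max_new_tokens, eos_id):
    """Greedy decoding with First Logit Boosting (FLB), sketched from the
    abstract: store the logit vector of the first generated token and add
    it to each subsequent step's logits before picking the next token.

    `model_logits(tokens)` is a hypothetical callable returning the
    next-token logit vector for the sequence generated so far.
    """
    tokens = []
    first_logits = None
    for _ in range(max_new_tokens):
        logits = model_logits(tokens)
        if first_logits is None:
            first_logits = logits.copy()   # cache the first step's logits
        else:
            logits = logits + first_logits  # boost later steps (assumed: plain sum)
        next_id = int(np.argmax(logits))
        if next_id == eos_id:
            break
        tokens.append(next_id)
    return tokens

def toy_model(tokens):
    """Toy 4-token vocabulary (id 3 = EOS): the first step strongly
    prefers id 1, later steps weakly drift toward id 2."""
    step = len(tokens)
    logits = np.zeros(4)
    if step == 0:
        logits[1] = 5.0    # strong visually-grounded preference at step 0
    elif step < 3:
        logits[2] = 1.0    # weak drift at later steps
    else:
        logits[3] = 10.0   # then end the sequence
    return logits

print(decode_with_flb(toy_model, max_new_tokens=10, eos_id=3))
```

In this toy setup, plain greedy decoding would drift to id 2 after the first step, whereas the cached first-step logits keep id 1 dominant throughout -- a small analogue of how FLB is described as sustaining first-token visual information against later language-prior drift.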

Jiwoo Ha, Jongwoo Baek, Jinhyun So• 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Hallucination Evaluation | AMBER | CHAIR | 7.1 | 172
Object Hallucination Mitigation | CHAIR | CHAIRs Score | 52.5 | 22
Object Hallucination Mitigation on Generative Tasks | AMBER | CHAIR | 9 | 22
Multi-turn Conversation | ConvBench | Win Rate (1st Turn) | 15.9 | 3
Object Hallucination Assessment | MMHalbench | Average Score | 2.23 | 3
Discriminative Evaluation | POPE (Random) | Accuracy | 84.6 | 3
Discriminative Evaluation | POPE (Popular) | Accuracy | 82.7 | 3
Discriminative Evaluation | POPE (Adversarial) | Accuracy | 80.1 | 3
Discriminative Evaluation | MME | MME Score | 115.9 | 3
Object Hallucination Evaluation | AMBER (test) | Accuracy | 7.28 | 2
