
Selective Training for Large Vision Language Models via Visual Information Gain

About

Large Vision Language Models (LVLMs) have achieved remarkable progress, yet they often suffer from language bias, producing answers without relying on visual evidence. While prior work attempts to mitigate this issue through decoding strategies, architectural modifications, or curated instruction data, it typically lacks a quantitative measure of how much individual training samples or tokens actually benefit from the image. In this work, we introduce Visual Information Gain (VIG), a perplexity-based metric that measures the reduction in prediction uncertainty provided by visual input. VIG enables fine-grained analysis at both the sample and token levels, effectively highlighting visually grounded elements such as colors, spatial relations, and attributes. Leveraging this, we propose a VIG-guided selective training scheme that prioritizes high-VIG samples and tokens. This approach improves visual grounding and mitigates language bias, achieving superior performance with significantly reduced supervision by focusing exclusively on visually informative samples and tokens.
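The abstract describes VIG as a perplexity-based measure of how much the image reduces prediction uncertainty. A minimal sketch of one natural reading of that idea: score each answer token by the drop in its negative log-likelihood (NLL) when the model is conditioned on the image versus text only. The exact formulation below (NLL difference per token, mean over tokens per sample) is an assumption for illustration, not the authors' published definition.

```python
def token_vig(nll_text_only, nll_with_image):
    """Token-level Visual Information Gain (assumed formulation).

    nll_text_only  -- per-token NLL from a text-only forward pass
    nll_with_image -- per-token NLL with the image in context
    A large positive value means the image substantially reduced
    the model's uncertainty about that token.
    """
    return [a - b for a, b in zip(nll_text_only, nll_with_image)]


def sample_vig(nll_text_only, nll_with_image):
    """Sample-level VIG: mean token-level gain, i.e. the log of the
    perplexity ratio between the text-only and image-conditioned runs."""
    gains = token_vig(nll_text_only, nll_with_image)
    return sum(gains) / len(gains)


# Hypothetical NLLs for the answer "a red cup": visually grounded tokens
# like "red" should see a larger NLL drop once the image is provided.
text_only = [1.2, 4.8, 2.1]
with_image = [1.1, 0.9, 1.4]
print(token_vig(text_only, with_image))
print(sample_vig(text_only, with_image))
```

Under this reading, a selective training scheme would keep only samples (or upweight only tokens) whose VIG exceeds a threshold, so supervision concentrates on visually informative content.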

Seulbi Lee, Sangheum Hwang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Hallucination Evaluation | MMHal-Bench | MMHal Score | 2.71 | 174 |
| Hallucination Evaluation | CHAIR | CHAIR_s | 47 | 166 |
| Hallucination Evaluation | POPE | Accuracy | 87.5 | 132 |
| Vision Understanding | MMBench | Accuracy | 67.89 | 104 |
| Visual Understanding | MM-Vet | MM-Vet Score | 37.01 | 102 |
| Document Visual Question Answering | DocVQA | Accuracy | 23.22 | 81 |
| Document Visual Question Answering | DocVQA v1.0 (test) | -- | -- | 49 |
| Vision Understanding | LLaVA-W | Score | 63 | 10 |
| Hallucination Evaluation | POPE v1.0 (test) | F1 Score | 87.15 | 6 |
| Hallucination Evaluation | MMHal v1.0 (test) | Score | 2.23 | 6 |

Showing 10 of 14 rows
