
Calibrated Self-Rewarding Vision Language Models

About

Large Vision-Language Models (LVLMs) have made substantial progress by integrating pre-trained large language models (LLMs) and vision models through instruction tuning. Despite these advancements, LVLMs often exhibit the hallucination phenomenon, where generated text responses appear linguistically plausible but contradict the input image, indicating a misalignment between image and text pairs. This misalignment arises because the model tends to prioritize textual information over visual input, even when both the language model and visual representations are of high quality. Existing methods leverage additional models or human annotations to curate preference data and enhance modality alignment through preference optimization. These approaches may not effectively reflect the target LVLM's preferences, making the curated preferences easily distinguishable from the model's own outputs. Our work addresses these challenges by proposing the Calibrated Self-Rewarding (CSR) approach, which enables the model to self-improve by iteratively generating candidate responses, evaluating the reward for each response, and curating preference data for fine-tuning. For reward modeling, we employ a step-wise strategy and incorporate visual constraints into the self-rewarding process to place greater emphasis on visual input. Empirical results demonstrate that CSR enhances performance and reduces hallucinations across ten benchmarks and tasks, achieving substantial improvements of 7.62% over existing methods. These empirical results are further supported by a theoretical analysis which, under mild assumptions, verifies the effectiveness of introducing visual constraints into the self-rewarding paradigm. Additionally, CSR is compatible with different vision-language models and can incrementally improve performance through iterative fine-tuning. Our data and code are available at https://github.com/YiyangZhou/CSR.
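To make the loop described in the abstract concrete, the sketch below outlines one CSR-style iteration: sample several candidate responses step by step, score each step with a calibrated reward that mixes the model's own self-reward with an image-relevance term, and keep the highest- and lowest-scored candidates as a preference pair for preference optimization. This is an illustrative sketch only; the helpers `lvlm.generate_stepwise`, `language_reward`, `visual_reward`, and `dpo_finetune` are hypothetical stand-ins for the actual implementation in the linked repository.

```python
# Minimal, illustrative sketch of one Calibrated Self-Rewarding (CSR) iteration.
# All model-facing helpers below are hypothetical placeholders, not the
# authors' API; see https://github.com/YiyangZhou/CSR for the real code.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class PreferencePair:
    prompt: str
    image: object    # e.g. a PIL.Image in a real pipeline
    chosen: str      # candidate with the highest calibrated reward
    rejected: str    # candidate with the lowest calibrated reward


def calibrated_reward(step: str, image, prompt: str,
                      language_reward: Callable, visual_reward: Callable,
                      alpha: float = 0.5) -> float:
    """Mix the model's self-reward with a visual-relevance score so the
    reward is calibrated toward the image rather than language priors alone."""
    return (1 - alpha) * language_reward(prompt, step) + alpha * visual_reward(image, step)


def csr_iteration(dataset, lvlm, language_reward, visual_reward,
                  dpo_finetune, num_candidates: int = 4):
    """One round: generate candidates, score them step-wise,
    curate preference pairs, then run preference optimization."""
    pairs: List[PreferencePair] = []
    for prompt, image in dataset:
        scored: List[Tuple[float, str]] = []
        for _ in range(num_candidates):
            # Step-wise (e.g. sentence-level) decoding: each step receives
            # a calibrated reward, and the response is scored by the average.
            steps = lvlm.generate_stepwise(prompt, image)
            total = sum(calibrated_reward(s, image, prompt,
                                          language_reward, visual_reward) for s in steps)
            scored.append((total / max(len(steps), 1), " ".join(steps)))
        scored.sort(key=lambda x: x[0])
        pairs.append(PreferencePair(prompt, image,
                                    chosen=scored[-1][1], rejected=scored[0][1]))
    # Fine-tune on the curated pairs (e.g. DPO-style preference optimization).
    return dpo_finetune(lvlm, pairs)
```

In this reading, repeating `csr_iteration` corresponds to the iterative fine-tuning rounds mentioned in the abstract, with each round's curated preferences drawn from the current model's own generations.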

Yiyang Zhou, Zhiyuan Fan, Dongjie Cheng, Sihan Yang, Zhaorun Chen, Chenhang Cui, Xiyao Wang, Yun Li, Linjun Zhang, Huaxiu Yao • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Question Answering | VizWiz | Accuracy 56.8 | 1043 |
| Visual Question Answering | GQA | -- | 963 |
| Multimodal Evaluation | MME | -- | 557 |
| Multimodal Understanding | MMBench | Accuracy 70.44 | 367 |
| Multimodal Reasoning | MM-Vet | MM-Vet Score 37.8 | 281 |
| Multimodal Understanding | MMMU | Accuracy 34.63 | 275 |
| Science Question Answering | ScienceQA | Accuracy 64.76 | 229 |
| Multimodal Understanding | SEED-Bench | Accuracy 65.12 | 203 |
| Multimodal Understanding | MMStar | Accuracy 32.59 | 197 |
| Diagram Question Answering | AI2D | AI2D Accuracy 53.34 | 196 |
Showing 10 of 32 rows

Other info

Code: https://github.com/YiyangZhou/CSR
