
Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement

About

Large vision-language models (LVLMs) have achieved impressive results in visual question-answering and reasoning tasks through vision instruction tuning on specific datasets. However, there remains significant room for improvement in aligning the visual and language modalities. Existing methods often depend on external models or data, leading to uncontrollable and unstable alignment results. In this paper, we propose SIMA, a self-improvement framework that enhances visual and language modality alignment without external dependencies. SIMA leverages existing vision instruction tuning datasets to self-generate responses and incorporates an in-context self-critic mechanism that constructs preference pairs for tuning. Crucially, our approach allows LVLMs to act as critics through carefully designed critic prompts, eliminating the need for additional fine-tuning on external instruction data. We introduce three novel visual metrics within the self-critic process to guide judgment, significantly improving the accuracy of the self-critic. Through extensive experiments across 14 hallucination and comprehensive benchmarks, we demonstrate that SIMA significantly improves LVLMs' performance and outperforms previous approaches, achieving superior modality alignment.
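The abstract describes a loop in which the LVLM first self-generates candidate responses for existing instruction-tuning prompts, then acts as its own critic to rank them, and finally keeps the best and worst candidates as a preference pair for tuning. A minimal sketch of that loop is below; every function name and the toy generator/critic are illustrative placeholders, not the authors' actual API, and a real critic would apply the paper's three visual metrics via a critic prompt rather than hard-coded scores.

```python
def build_preference_pair(image, prompt, generate, critic):
    """One SIMA-style step: self-generate candidates, self-critique them,
    and keep the best/worst as a (chosen, rejected) preference pair."""
    responses = generate(image, prompt)           # self-generation
    scores = critic(image, prompt, responses)     # in-context self-critic
    ranked = sorted(zip(scores, responses), key=lambda p: p[0], reverse=True)
    return {"prompt": prompt, "chosen": ranked[0][1], "rejected": ranked[-1][1]}

# Toy stand-ins for the LVLM's generator and critic (placeholders only;
# in SIMA the same LVLM being tuned plays both roles).
def toy_generate(image, prompt):
    return [f"candidate {i}: a description of {image}" for i in range(2)]

def toy_critic(image, prompt, responses):
    # A real critic prompt would judge faithfulness to the image using
    # the paper's visual metrics; here candidate 1 is simply scored higher.
    return [0.2, 0.9]

pair = build_preference_pair("img_001.png", "Describe the image.",
                             toy_generate, toy_critic)
```

The resulting `{"prompt", "chosen", "rejected"}` records are the standard input format for preference-tuning objectives such as DPO, which is one natural way to consume such pairs.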

Xiyao Wang, Jiuhai Chen, Zhaoyang Wang, Yuhang Zhou, Yiyang Zhou, Huaxiu Yao, Tianyi Zhou, Tom Goldstein, Parminder Bhatia, Furong Huang, Cao Xiao• 2024

Related benchmarks

Task | Dataset | Result | Rank
Visual Question Answering | VizWiz | Accuracy: 62.1 | 1525
Visual Question Answering | TextVQA | -- | 1285
Text-based Visual Question Answering | TextVQA | Accuracy: 66.1 | 807
Multimodal Evaluation | MME | -- | 658
Multimodal Understanding | MMBench | Accuracy: 71.04 | 637
Science Question Answering | ScienceQA | Accuracy: 72.5 | 502
Multimodal Understanding | MMMU | Accuracy: 35.14 | 437
Multimodal Capability Evaluation | MM-Vet | Score: 38.4 | 345
Multimodal Understanding | SEED-Bench | Accuracy: 64.68 | 343
Multimodal Understanding | MMStar | Accuracy: 32.4 | 324

(Showing 10 of 24 rows)
