
ALLaVA: Harnessing GPT4V-Synthesized Data for Lite Vision-Language Models

About

Large vision-language models (LVLMs) have shown promise in a broad range of vision-language tasks with their strong reasoning and generalization capabilities. However, they require considerable computational resources for training and deployment. This study aims to bridge the performance gap between traditional-scale LVLMs and resource-friendly lite versions by adopting high-quality training data. To this end, we propose a comprehensive pipeline for generating a synthetic dataset. The key idea is to leverage strong proprietary models to generate (i) fine-grained image annotations for vision-language alignment and (ii) complex reasoning visual question-answering pairs for visual instruction fine-tuning, yielding 1.3M samples in total. We train a series of lite VLMs on the synthetic dataset, and experimental results demonstrate the effectiveness of the proposed scheme: the models achieve competitive performance on 17 benchmarks among 4B-scale LVLMs, and even perform on par with 7B/13B-scale models on various benchmarks. This work highlights the feasibility of adopting high-quality data in crafting more efficient LVLMs. We name our dataset ALLaVA, and open-source it to the research community for developing better resource-efficient LVLMs for wider usage.
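As a rough illustration of the two-stage synthesis pipeline described above (this is not the authors' released code; `query_vlm`, the prompt strings, and the output schema are hypothetical stand-ins for a proprietary GPT-4V-style call):

```python
# Sketch of the two-stage synthetic-data pipeline: (i) fine-grained captions
# for vision-language alignment, (ii) complex-reasoning VQA pairs for
# visual instruction fine-tuning. All names here are illustrative.

CAPTION_PROMPT = "Describe this image in fine-grained detail."
VQA_PROMPT = "Pose and answer a complex reasoning question about this image."


def query_vlm(image_path: str, prompt: str) -> str:
    """Hypothetical placeholder for a call to a strong proprietary VLM.

    In practice this would send the image and prompt to an API such as
    GPT-4V and return the generated text.
    """
    return f"<response for {image_path} given: {prompt}>"


def synthesize(image_paths):
    """Build one caption sample and one VQA sample per image."""
    dataset = []
    for path in image_paths:
        caption = query_vlm(path, CAPTION_PROMPT)   # stage (i): alignment data
        vqa = query_vlm(path, VQA_PROMPT)           # stage (ii): instruction data
        dataset.append({"image": path, "caption": caption, "vqa": vqa})
    return dataset


samples = synthesize(["img_001.jpg", "img_002.jpg"])
print(len(samples))  # 2
```

Run over a large image corpus, each image contributes both an alignment sample and an instruction-tuning sample, which is how a single pass over the data can yield the paired caption/VQA records that make up the 1.3M-sample dataset.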

Guiming Hardy Chen, Shunian Chen, Ruifei Zhang, Junying Chen, Xiangbo Wu, Zhiyi Zhang, Zhihong Chen, Jianquan Li, Xiang Wan, Benyou Wang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | – | – | 1455 |
| Multimodal Evaluation | MME | Score | 1620 | 658 |
| Multimodal Understanding | MMBench | – | – | 637 |
| Multimodal Understanding | MM-Vet | MM-Vet Score | 32.2 | 531 |
| Science Question Answering | ScienceQA | – | – | 502 |
| Multimodal Understanding | MMMU | Accuracy | 35.3 | 437 |
| Multimodal Reasoning | MM-Vet | MM-Vet Score | 38.6 | 431 |
| Multimodal Perception and Cognition | MME | Overall Score | 1620 | 182 |
| Multimodal Understanding | MMBench CN | Accuracy | 52.2 | 174 |
| Multimodal Reasoning | MMBench | – | – | 78 |

Showing 10 of 14 rows.
