
MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning

About

Multi-modal large language models (MLLMs) have made significant strides in various visual understanding tasks. However, the majority of these models are constrained to process low-resolution images, which limits their effectiveness in perception tasks that necessitate detailed visual information. In our study, we present MG-LLaVA, an innovative MLLM that enhances the model's visual processing capabilities by incorporating a multi-granularity vision flow, which includes low-resolution, high-resolution, and object-centric features. We propose the integration of an additional high-resolution visual encoder to capture fine-grained details, which are then fused with base visual features through a Conv-Gate fusion network. To further refine the model's object recognition abilities, we incorporate object-level features derived from bounding boxes identified by offline detectors. Being trained solely on publicly available multimodal data through instruction tuning, MG-LLaVA demonstrates exceptional perception skills. We instantiate MG-LLaVA with a wide variety of language encoders, ranging from 3.8B to 34B, to evaluate the model's performance comprehensively. Extensive evaluations across multiple benchmarks demonstrate that MG-LLaVA outperforms existing MLLMs of comparable parameter sizes, showcasing its remarkable efficacy. The code will be available at https://github.com/PhoenixZ810/MG-LLaVA.
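The abstract describes fusing base (low-resolution) visual features with fine-grained high-resolution features through a Conv-Gate fusion network. The exact architecture is defined in the paper and repository; the following is a minimal sketch of one plausible gated fusion, assuming the gate is a 1x1 convolution (equivalent to a per-token linear layer) over the channel-concatenated features, followed by a sigmoid and a gated residual add. The function and parameter names (`conv_gate_fusion`, `w_gate`, `b_gate`) are illustrative, not taken from the released code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_gate_fusion(base_feat, hires_feat, w_gate, b_gate):
    """Fuse low-resolution (base) and high-resolution visual features.

    base_feat, hires_feat: (num_tokens, dim) token features, already
        spatially aligned so they can be fused token by token.
    w_gate: (2*dim, dim) weights of a 1x1 conv, applied per token to the
        channel-concatenated features.
    b_gate: (dim,) bias of the gate.
    """
    concat = np.concatenate([base_feat, hires_feat], axis=-1)  # (tokens, 2*dim)
    gate = sigmoid(concat @ w_gate + b_gate)                   # (tokens, dim), values in (0, 1)
    # Gated residual fusion: keep the base features, add gated fine detail.
    return base_feat + gate * hires_feat

# Toy usage: 4 visual tokens with 8 channels each.
rng = np.random.default_rng(0)
base = rng.standard_normal((4, 8))
hires = rng.standard_normal((4, 8))
w = np.zeros((16, 8))  # zero weights -> gate = sigmoid(0) = 0.5 everywhere
b = np.zeros(8)
fused = conv_gate_fusion(base, hires, w, b)
```

With zero gate weights the gate is 0.5 everywhere, so the fused output is `base + 0.5 * hires`; a trained gate would learn, per channel and token, how much high-resolution detail to inject.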

Xiangyu Zhao, Xiangtai Li, Haodong Duan, Haian Huang, Yining Li, Kai Chen, Hua Yang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | TextVQA | Accuracy | 70 | 1117 |
| Visual Question Answering | VizWiz | Accuracy | 60 | 1043 |
| Visual Question Answering | GQA | Accuracy | 62.7 | 963 |
| Multimodal Evaluation | MME | — | — | 557 |
| Video Question Answering | MSRVTT-QA | Accuracy | 59.8 | 481 |
| Multimodal Understanding | MM-Vet | MM-Vet Score | 41 | 418 |
| Video Question Answering | MSVD-QA | Accuracy | 71.5 | 340 |
| Visual Question Answering | TextVQA (val) | VQA Score | 67.3 | 309 |
| Multimodal Understanding | MMMU | Accuracy | 35.3 | 275 |
| Visual Question Answering | ChartQA | Accuracy | 40.8 | 239 |

Showing 10 of 23 rows.

Other info

Code

https://github.com/PhoenixZ810/MG-LLaVA