Honeybee: Locality-enhanced Projector for Multimodal LLM

About

In Multimodal Large Language Models (MLLMs), a visual projector plays a crucial role in bridging pre-trained vision encoders with LLMs, enabling profound visual understanding while harnessing the LLMs' robust capabilities. Despite the importance of the visual projector, it has been relatively less explored. In this study, we first identify two essential projector properties: (i) flexibility in managing the number of visual tokens, crucial for MLLMs' overall efficiency, and (ii) preservation of local context from visual features, vital for spatial understanding. Based on these findings, we propose a novel projector design that is both flexible and locality-enhanced, effectively satisfying the two desirable properties. Additionally, we present comprehensive strategies to effectively utilize multiple and multifaceted instruction datasets. Through extensive experiments, we examine the impact of individual design choices. Finally, our proposed MLLM, Honeybee, remarkably outperforms previous state-of-the-art methods across various benchmarks, including MME, MMBench, SEED-Bench, and LLaVA-Bench, achieving significantly higher efficiency. Code and models are available at https://github.com/kakaobrain/honeybee.
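To make the two projector properties concrete, here is a minimal, illustrative sketch of a locality-enhanced projector in PyTorch. It is not the paper's implementation (see the repository above for that); the class name `ConvAbstractor` and all hyperparameters are hypothetical. The sketch shows how convolutions can preserve local context while adaptive pooling gives flexibility over the number of visual tokens:

```python
import torch
import torch.nn as nn

class ConvAbstractor(nn.Module):
    """Hypothetical locality-enhanced projector sketch (not the paper's code).

    Maps a grid of vision-encoder features to a configurable number of
    tokens in the LLM embedding space: convolutions mix neighboring
    patches (locality preservation), and adaptive pooling sets the
    output token count (flexibility).
    """

    def __init__(self, vis_dim: int, llm_dim: int, out_grid: int = 12):
        super().__init__()
        self.conv_in = nn.Sequential(
            nn.Conv2d(vis_dim, vis_dim, kernel_size=3, padding=1),
            nn.GELU(),
        )
        # Adaptive pooling fixes the output at out_grid x out_grid tokens,
        # regardless of the input patch-grid size.
        self.pool = nn.AdaptiveAvgPool2d(out_grid)
        self.conv_out = nn.Sequential(
            nn.Conv2d(vis_dim, vis_dim, kernel_size=3, padding=1),
            nn.GELU(),
        )
        self.proj = nn.Linear(vis_dim, llm_dim)  # project into LLM space

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, C) patch features from a ViT, N a square grid
        b, n, c = feats.shape
        h = w = int(n ** 0.5)
        x = feats.transpose(1, 2).reshape(b, c, h, w)  # to 2D feature map
        x = self.conv_in(x)       # local mixing before downsampling
        x = self.pool(x)          # reduce to out_grid x out_grid tokens
        x = self.conv_out(x)      # local mixing after downsampling
        x = x.flatten(2).transpose(1, 2)  # (B, out_grid^2, C)
        return self.proj(x)       # (B, out_grid^2, llm_dim)


# Example: 24x24 = 576 ViT patches reduced to 144 locality-aware tokens
tokens = ConvAbstractor(vis_dim=1024, llm_dim=4096, out_grid=12)(
    torch.randn(2, 576, 1024)
)
print(tokens.shape)  # torch.Size([2, 144, 4096])
```

By contrast, a plain linear projector keeps locality but has no control over the token count, while a resampler-style projector compresses tokens but can discard local spatial structure; the sketch above illustrates how one design can satisfy both properties at once.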

Junbum Cha, Wooyoung Kang, Jonghwan Mun, Byungseok Roh • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | VQA v2 | Accuracy | 74.6 | 1165 |
| Visual Question Answering | VizWiz | Accuracy | 49.2 | 1043 |
| Visual Question Answering | GQA | Accuracy | 59.2 | 963 |
| Object Hallucination Evaluation | POPE | Accuracy | 85.6 | 935 |
| Multimodal Evaluation | MME | -- | -- | 557 |
| Multimodal Understanding | MMBench | -- | -- | 367 |
| Multimodal Reasoning | MM-Vet | MM-Vet Score | 42.2 | 281 |
| Multimodal Understanding | MMMU | Accuracy | 37.3 | 275 |
| Science Question Answering | ScienceQA (test) | Average Accuracy | 94.39 | 208 |
| Multimodal Understanding | SEED-Bench | -- | -- | 203 |

Showing 10 of 31 benchmark results.

Other info

Code: https://github.com/kakaobrain/honeybee