
EVLM: An Efficient Vision-Language Model for Visual Understanding

About

In the field of multi-modal language models, most methods are built on an architecture similar to LLaVA. These models use single-layer ViT features as a visual prompt, feeding them directly into the language model alongside textual tokens. However, for long sequences of visual signals, or for inputs such as videos, the self-attention mechanism of language models incurs significant computational overhead. Additionally, relying on single-layer ViT features makes it difficult for large language models to fully perceive visual signals. This paper proposes an efficient multi-modal language model that minimizes computational cost while enabling the model to perceive visual signals as comprehensively as possible. Our method primarily includes: (1) employing cross-attention for image-text interaction, similar to Flamingo; (2) utilizing hierarchical ViT features; and (3) introducing a Mixture of Experts (MoE) mechanism to enhance model effectiveness. Our model achieves competitive scores on public multi-modal benchmarks and performs well on tasks such as image captioning and video captioning.
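To make the efficiency argument concrete, the sketch below contrasts the two interaction patterns the abstract describes: instead of concatenating visual tokens into the language model's self-attention sequence (LLaVA-style, cost roughly O((T+V)^2)), text queries cross-attend to visual keys/values (Flamingo-style, cost O(T*V)). This is a minimal NumPy illustration, not the paper's implementation; the single-head attention, the dimensions, and the simple concatenation of multiple ViT layers as "hierarchical" features are all assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text_tokens, visual_tokens):
    """Text queries attend to visual keys/values (Flamingo-style).

    Score matrix is (T, V), so cost scales as O(T*V) rather than the
    O((T+V)^2) of self-attention over the concatenated sequence,
    which matters when V is large (e.g. video frames).
    """
    d_k = text_tokens.shape[-1]
    scores = text_tokens @ visual_tokens.T / np.sqrt(d_k)  # (T, V)
    weights = softmax(scores, axis=-1)                     # rows sum to 1
    return weights @ visual_tokens                         # (T, d)

rng = np.random.default_rng(0)
T, V, d = 8, 256, 64  # text tokens, visual tokens, hidden dim

text = rng.standard_normal((T, d))
# Stand-in for hierarchical ViT features: tokens drawn from several
# layers and concatenated (an assumption; the paper's fusion may differ).
layers = [rng.standard_normal((V // 4, d)) for _ in range(4)]
visual = np.concatenate(layers, axis=0)  # (V, d)

out = cross_attention(text, visual)
print(out.shape)  # (8, 64)
```

Note that the text sequence length seen by the language model stays at T regardless of how many visual tokens are injected, which is the source of the savings for long visual inputs.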

Kaibing Chen, Dong Shen, Hanwen Zhong, Huasong Zhong, Kui Xia, Di Xu, Wei Yuan, Yifei Hu, Bin Wen, Tianke Zhang, Changyi Liu, Dewen Fan, Huihui Xiao, Jiahong Wu, Fan Yang, Size Li, Di Zhang• 2024

Related benchmarks

Task                              Dataset      Metric         Result  Rank
Object Hallucination Evaluation   POPE         Accuracy       89.7    1455
Visual Question Answering         TextVQA      Accuracy       67.5    1285
Visual Question Answering         GQA          Accuracy       64.4    1249
Diagram Understanding             AI2D         Accuracy       76      247
Visual Question Answering         VQAv2        Accuracy       81.9    177
Multi-modal Understanding         MMBench CN   --             --      174
Multi-modal Understanding         MMBench EN   Overall Score  76.9    55
Visual Question Answering         VizWizQA     Accuracy       47.3    37
