
Treat Visual Tokens as Text? But Your MLLM Only Needs Fewer Efforts to See

About

By treating visual tokens from visual encoders as text tokens, Multimodal Large Language Models (MLLMs) have achieved remarkable progress across diverse visual understanding tasks, leveraging the robust architectures of Large Language Models (LLMs). However, as token counts grow, the quadratic scaling of computation in LLMs introduces a significant efficiency bottleneck, impeding further scalability. Although recent approaches have explored pruning visual tokens or employing lighter LLM architectures, the computational overhead from an increasing number of visual tokens remains a substantial challenge. In this study, we investigate the redundancy in visual computation at both the parameter and computational pattern levels within LLaVA, a representative MLLM, and introduce a suite of streamlined strategies to enhance efficiency. These include neighbor-aware visual token attention, pruning of inactive visual attention heads, and selective layer dropping for visual computations. By implementing these strategies in LLaVA, we achieve a reduction in computational demands of 88% while maintaining model performance across key benchmarks. Additionally, we validate the existence of visual computational redundancy in other MLLMs, such as Qwen2-VL-7B and InternVL-2.0-4B/8B/26B. These results present a novel pathway for MLLMs to handle dense visual tokens with minimal computational costs. Code and model checkpoints will be released to support further research.
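To make the first strategy concrete, here is a minimal sketch of what a neighbor-aware attention mask for visual tokens could look like. This is an illustration under assumed conventions, not the authors' implementation: the sequence layout ([visual tokens | text tokens]), the window size, and the function name are all hypothetical, and causal masking of text tokens is omitted for brevity.

```python
import torch

def neighbor_aware_visual_mask(num_visual: int, num_text: int,
                               window: int = 4) -> torch.Tensor:
    """Boolean attention mask (True = may attend) for a sequence laid out as
    [visual tokens | text tokens]. Visual tokens attend only to visual tokens
    within `window` positions of themselves; text tokens attend to the full
    sequence. Hypothetical layout, for illustration only."""
    n = num_visual + num_text
    mask = torch.zeros(n, n, dtype=torch.bool)

    # Visual rows: restrict attention to a local window over visual columns.
    idx = torch.arange(num_visual)
    dist = (idx[:, None] - idx[None, :]).abs()
    mask[:num_visual, :num_visual] = dist <= window

    # Text rows: unrestricted attention over the whole sequence.
    mask[num_visual:, :] = True
    return mask

# Example: 6 visual tokens, 3 text tokens, window of 1 neighbor per side.
mask = neighbor_aware_visual_mask(num_visual=6, num_text=3, window=1)
```

A mask like this can be passed to an attention implementation that accepts boolean masks (e.g. as the `attn_mask` argument of PyTorch's scaled dot-product attention); the computational saving comes from visual-to-visual attention becoming banded rather than dense.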

Zeliang Zhang, Phu Pham, Wentian Zhao, Kun Wan, Yu-Jhe Li, Jianing Zhou, Daniel Miranda, Ajinkya Kale, Chenliang Xu • 2024

Related benchmarks

Task                                  Dataset              Metric            Result    Rank
Visual Question Answering             VQA v2               Accuracy          77.4      1165
Object Hallucination Evaluation       POPE                 Accuracy          86.6      935
Text-based Visual Question Answering  TextVQA              Accuracy          55.2      496
Visual Question Answering             GQA                  Accuracy          60.7      374
Multimodal Understanding              MMBench CN           Accuracy          57        162
Science Question Answering            ScienceQA SQA-IMG    Accuracy          68        114
Multimodal Understanding              MMBench (MMB)        Accuracy          64.6      69
Multimodal Perception                 MME Perception       Perception Score  1.42e+3   61
Multimodal Understanding              SEED-I Image         Accuracy          0.646     40
Visual Perception                     MME Perception       MME^P             1.43e+3   27
