
FocusLLaVA: A Coarse-to-Fine Approach for Efficient and Effective Visual Token Compression

About

Recent advances in Multi-modal Large Language Models have demonstrated that high-resolution image input is crucial for model capabilities, especially for fine-grained tasks. However, high-resolution images lead to a quadratic increase in the number of visual tokens fed into the LLM, resulting in significant computational costs. Existing work develops visual token compression methods to improve efficiency, often at the expense of performance. We argue that removing visual redundancy can improve both efficiency and performance simultaneously. We build a coarse-to-fine visual token compression method, with a vision-guided sampler that compresses redundant regions with low information density, and a text-guided sampler that selects visual tokens strongly correlated with the user instructions. With these two modules, the proposed FocusLLaVA achieves improvements in both efficiency and performance. We validate the effectiveness of our approach on a wide range of evaluation datasets.
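The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the information-density scores, instruction embedding, and keep ratios are hypothetical stand-ins, and each stage is reduced to a simple top-k selection.

```python
import numpy as np

def vision_guided_sample(tokens, info_scores, keep_ratio=0.5):
    """Coarse stage: drop tokens from low-information-density regions.

    tokens      : (N, D) visual token features
    info_scores : (N,) stand-in per-token information density
    """
    k = max(1, int(len(tokens) * keep_ratio))
    keep = np.sort(np.argsort(info_scores)[-k:])  # top-k by density, in order
    return tokens[keep], keep

def text_guided_sample(tokens, text_emb, keep_ratio=0.5):
    """Fine stage: keep tokens most similar to the instruction embedding."""
    sims = tokens @ text_emb / (
        np.linalg.norm(tokens, axis=1) * np.linalg.norm(text_emb) + 1e-8)
    k = max(1, int(len(tokens) * keep_ratio))
    keep = np.sort(np.argsort(sims)[-k:])  # top-k by cosine similarity
    return tokens[keep], keep

rng = np.random.default_rng(0)
vis = rng.normal(size=(576, 64))   # e.g. 24x24 patch tokens from the vision encoder
info = rng.random(576)             # hypothetical saliency / density scores
txt = rng.normal(size=64)          # hypothetical pooled instruction embedding

coarse, _ = vision_guided_sample(vis, info, keep_ratio=0.5)   # 576 -> 288 tokens
fine, _ = text_guided_sample(coarse, txt, keep_ratio=0.5)     # 288 -> 144 tokens
print(coarse.shape, fine.shape)
```

Running the coarse vision-guided stage before the text-guided one means the instruction-conditioned selection only scores tokens that already carry visual information, which is the efficiency argument the abstract makes.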

Yuke Zhu, Chi Xie, Shuang Liang, Bo Zheng, Sheng Guo • 2024

Related benchmarks

| Task                             | Dataset                  | Result         | Rank |
|----------------------------------|--------------------------|----------------|------|
| Visual Question Answering        | TextVQA                  | Accuracy 70    | 1117 |
| Visual Question Answering        | GQA                      | Accuracy 66    | 963  |
| Object Hallucination Evaluation  | POPE                     | Accuracy 87.7  | 935  |
| Multimodal Evaluation            | MME                      | --             | 557  |
| Multimodal Capability Evaluation | MM-Vet                   | Score 41.3     | 282  |
| Multimodal Model Evaluation      | MMBench                  | Accuracy 74.7  | 180  |
| Multimodal Evaluation            | MMBench CN               | Accuracy 70.3  | 57   |
| Question Answering               | ScienceQA                | Accuracy 79    | 40   |
| Multimodal Evaluation            | LLaVA-Bench In-the-Wild  | Score 65.6     | 36   |
