VisionZip: Longer is Better but Not Necessary in Vision Language Models

About

Recent advancements in vision-language models have enhanced performance by increasing the length of visual tokens, making them much longer than text tokens and significantly raising computational costs. However, we observe that the visual tokens generated by popular vision encoders, such as CLIP and SigLIP, contain significant redundancy. To address this, we introduce VisionZip, a simple yet effective method that selects a set of informative tokens for input to the language model, reducing visual token redundancy and improving efficiency while maintaining model performance. The proposed VisionZip can be widely applied to image and video understanding tasks and is well-suited for multi-turn dialogues in real-world scenarios, where previous methods tend to underperform. Experimental results show that VisionZip outperforms the previous state-of-the-art method by at least 5% performance gains across nearly all settings. Moreover, our method significantly enhances model inference speed, improving the prefilling time by 8x and enabling the LLaVA-Next 13B model to infer faster than the LLaVA-Next 7B model while achieving better results. Furthermore, we analyze the causes of this redundancy and encourage the community to focus on extracting better visual features rather than merely increasing token length. Our code is available at https://github.com/dvlab-research/VisionZip .
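The core idea — keeping only the most informative visual tokens before they reach the language model — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a per-token attention score (e.g. [CLS]-to-patch attention from the vision encoder) is available, keeps the top-k highest-attention tokens, and mean-pools the rest into a single contextual token (VisionZip merges the remainder into several contextual tokens by similarity; that step is simplified here).

```python
import numpy as np

def select_visual_tokens(tokens, attn, k):
    """Hedged sketch of attention-based visual token reduction.

    tokens : (N, D) visual token features from the vision encoder
    attn   : (N,)   attention each token receives (e.g. from [CLS])
    k      : number of dominant tokens to keep
    """
    order = np.argsort(attn)[::-1]       # indices, highest attention first
    dominant = tokens[order[:k]]         # informative tokens, kept as-is
    remainder = tokens[order[k:]]        # redundant tokens
    # Merge the remainder into one contextual token (simplification:
    # the paper merges into several contextual tokens by similarity).
    contextual = remainder.mean(axis=0, keepdims=True)
    return np.concatenate([dominant, contextual], axis=0)

# Example: 576 CLIP patch tokens reduced to 64 dominant + 1 contextual
rng = np.random.default_rng(0)
feats = rng.normal(size=(576, 1024))
scores = rng.random(576)
reduced = select_visual_tokens(feats, scores, k=64)
print(reduced.shape)  # (65, 1024)
```

Because the language model's prefill cost grows with sequence length, shrinking 576 visual tokens to ~65 is what drives the reported speedups.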

Senqiao Yang, Yukang Chen, Zhuotao Tian, Chengyao Wang, Jingyao Li, Bei Yu, Jiaya Jia • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | VizWiz | Accuracy | 98.75 | 1525 |
| Object Hallucination Evaluation | POPE | Accuracy | 87.6 | 1455 |
| Visual Question Answering | VQA v2 | Accuracy | 79.7 | 1362 |
| Visual Question Answering | TextVQA | Accuracy | 62 | 1285 |
| Visual Question Answering | GQA | Accuracy | 62.4 | 1249 |
| Automatic Speech Recognition | LibriSpeech clean (test) | WER | 5.93 | 1156 |
| Automatic Speech Recognition | LibriSpeech (test-other) | WER | 7.71 | 1151 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 71.7 | 807 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 76.8 | 706 |
| Multimodal Evaluation | MME | Score | 2270 | 658 |

Showing 10 of 351 rows.
