
Adaptive-VoCo: Complexity-Aware Visual Token Compression for Vision-Language Models

About

In recent years, large-scale vision-language models (VLMs) have demonstrated remarkable performance on multimodal understanding and reasoning tasks. However, handling high-dimensional visual features often incurs substantial computational and memory costs. VoCo-LLaMA alleviates this issue by compressing visual patch tokens into a few VoCo tokens, reducing computational overhead while preserving strong cross-modal alignment. Nevertheless, such approaches typically adopt a fixed compression rate, limiting their ability to adapt to varying levels of visual complexity. To address this limitation, we propose Adaptive-VoCo, a framework that augments VoCo-LLaMA with a lightweight predictor for adaptive compression. This predictor dynamically selects an optimal compression rate by quantifying an image's visual complexity using statistical cues from the vision encoder, such as patch token entropy and attention map variance. Furthermore, we introduce a joint loss function that integrates rate regularization with complexity alignment. This enables the model to balance inference efficiency with representational capacity, particularly in challenging scenarios. Experimental results show that our method consistently outperforms fixed-rate baselines across multiple multimodal tasks, highlighting the potential of adaptive visual compression for creating more efficient and robust VLMs.
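The abstract states that a lightweight predictor maps statistical cues from the vision encoder (patch token entropy, attention map variance) to a discrete compression rate. A minimal sketch of how such a predictor might look is below; the module name `ComplexityPredictor`, the two-cue MLP design, and the rate table are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ComplexityPredictor(nn.Module):
    """Hypothetical sketch of an Adaptive-VoCo-style rate predictor.

    The paper only names the cues (patch token entropy, attention map
    variance); this architecture and its dimensions are assumptions.
    """

    def __init__(self, num_rates: int = 4):
        super().__init__()
        # Two scalar cues -> logits over a discrete set of compression rates.
        self.mlp = nn.Sequential(
            nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, num_rates)
        )

    @staticmethod
    def patch_entropy(patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (num_patches, dim). Softmax over the feature dim,
        # then mean Shannon entropy across patches as a complexity proxy.
        p = patch_tokens.softmax(dim=-1)
        return -(p * p.clamp_min(1e-9).log()).sum(-1).mean()

    @staticmethod
    def attention_variance(attn: torch.Tensor) -> torch.Tensor:
        # attn: (heads, num_patches, num_patches) attention map from the
        # vision encoder; higher variance suggests concentrated attention.
        return attn.var()

    def forward(self, patch_tokens: torch.Tensor, attn: torch.Tensor) -> int:
        cues = torch.stack(
            [self.patch_entropy(patch_tokens), self.attention_variance(attn)]
        )
        logits = self.mlp(cues)
        # Index into a hypothetical rate table, e.g. [1, 4, 16, 64] VoCo tokens.
        return int(logits.argmax().item())

# Usage with ViT-like shapes (196 patches, 768-dim tokens, 12 heads):
predictor = ComplexityPredictor()
rate_idx = predictor(torch.randn(196, 768), torch.rand(12, 196, 196))
```

At inference, the selected index would determine how many VoCo tokens the compressor emits for that image, so visually complex inputs retain more tokens.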

Xiaoyang Guo, Keze Wang • 2025

Related benchmarks

| Task                            | Dataset         | Metric    | Result | Rank |
|---------------------------------|-----------------|-----------|--------|------|
| Object Hallucination Evaluation | POPE            | Accuracy  | 81.4   | 1455 |
| Visual Question Answering       | VQA v2          | Accuracy  | 72.3   | 1362 |
| Multimodal Understanding        | MMBench         | Accuracy  | 60.7   | 637  |
| Visual Question Answering       | GQA             | Accuracy  | 57.6   | 505  |
| Multimodal Understanding        | MME             | MME Score | 1290   | 207  |
| Multimodal Understanding        | SEED            | Accuracy  | 50.2   | 183  |
| Science Question Answering      | ScienceQA SQA-I | Accuracy  | 68.5   | 103  |
