
Adaptive-VoCo: Complexity-Aware Visual Token Compression for Vision-Language Models

About

In recent years, large-scale vision-language models (VLMs) have demonstrated remarkable performance on multimodal understanding and reasoning tasks. However, handling high-dimensional visual features often incurs substantial computational and memory costs. VoCo-LLaMA alleviates this issue by compressing visual patch tokens into a few VoCo tokens, reducing computational overhead while preserving strong cross-modal alignment. Nevertheless, such approaches typically adopt a fixed compression rate, limiting their ability to adapt to varying levels of visual complexity. To address this limitation, we propose Adaptive-VoCo, a framework that augments VoCo-LLaMA with a lightweight predictor for adaptive compression. This predictor dynamically selects an optimal compression rate by quantifying an image's visual complexity using statistical cues from the vision encoder, such as patch token entropy and attention map variance. Furthermore, we introduce a joint loss function that integrates rate regularization with complexity alignment. This enables the model to balance inference efficiency with representational capacity, particularly in challenging scenarios. Experimental results show that our method consistently outperforms fixed-rate baselines across multiple multimodal tasks, highlighting the potential of adaptive visual compression for creating more efficient and robust VLMs.

Xiaoyang Guo, Keze Wang • 2025
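The abstract outlines the mechanism but not its implementation. Below is a minimal PyTorch sketch of what a complexity-aware rate predictor and joint loss could look like. The names (ComplexityPredictor, joint_loss), the specific entropy and variance cue definitions, the candidate token budgets, and the loss weights are all assumptions made for illustration, not details published with the paper.

```python
# Minimal sketch (assumptions throughout): every name, cue definition, and
# candidate token budget below is illustrative, not Adaptive-VoCo's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ComplexityPredictor(nn.Module):
    """Maps two statistical cues from the vision encoder (patch-token entropy,
    attention-map variance) to a choice among candidate compression rates,
    expressed here as VoCo token budgets."""

    def __init__(self, candidate_budgets=(1, 4, 16, 64)):
        super().__init__()
        self.candidate_budgets = candidate_budgets
        # Lightweight head: 2 scalar cues -> logits over candidate budgets.
        self.head = nn.Sequential(
            nn.Linear(2, 16),
            nn.ReLU(),
            nn.Linear(16, len(candidate_budgets)),
        )

    @staticmethod
    def patch_token_entropy(patch_tokens):
        # patch_tokens: (B, N, D). One illustrative entropy cue: softmax each
        # patch's features into a distribution, average the entropy over N.
        probs = F.softmax(patch_tokens, dim=-1)
        ent = -(probs * probs.clamp_min(1e-9).log()).sum(-1)  # (B, N)
        return ent.mean(dim=1)                                # (B,)

    @staticmethod
    def attention_map_variance(attn):
        # attn: (B, H, N, N) attention weights from the vision encoder.
        return attn.flatten(-2).var(dim=-1).mean(dim=1)       # (B,)

    def forward(self, patch_tokens, attn):
        cues = torch.stack(
            [self.patch_token_entropy(patch_tokens),
             self.attention_map_variance(attn)],
            dim=-1,
        )                                    # (B, 2)
        logits = self.head(cues)             # (B, K)
        return logits, logits.argmax(-1)     # hard budget choice at inference


def joint_loss(task_loss, logits, budgets, complexity,
               lam_rate=0.1, lam_align=0.1):
    """Hypothetical joint objective: task loss + rate regularization (prefer
    fewer VoCo tokens) + complexity alignment (spend more tokens on images
    scored as more complex). `budgets` is a (K,) float tensor of candidate
    token counts; `complexity` is a (B,) score in [0, 1], e.g., a normalized
    combination of the two cues."""
    probs = F.softmax(logits, dim=-1)                      # (B, K)
    expected = (probs * budgets).sum(-1) / budgets.max()   # (B,) in (0, 1]
    rate_reg = expected.mean()
    align = F.mse_loss(expected, complexity)
    return task_loss + lam_rate * rate_reg + lam_align * align


if __name__ == "__main__":
    predictor = ComplexityPredictor()
    tokens = torch.randn(2, 576, 1024)  # e.g., a 24x24 grid of patch tokens
    attn = torch.softmax(torch.randn(2, 16, 576, 576), dim=-1)
    logits, idx = predictor(tokens, attn)
    print([predictor.candidate_budgets[i] for i in idx.tolist()])
```

Note that the sketch picks a budget with a hard argmax at inference; during training a differentiable selection (such as the soft expectation used in joint_loss, or a Gumbel-softmax) would be needed so the rate regularizer and alignment term can back-propagate into the predictor.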

Related benchmarks

Task                             Dataset            Metric      Result   Rank
Visual Question Answering        VQA v2             Accuracy    72.3     1165
Object Hallucination Evaluation  POPE               Accuracy    81.4     935
Visual Question Answering        GQA                Accuracy    57.6     374
Multimodal Understanding         MMBench            Accuracy    60.7     367
Multimodal Understanding         MME                MME Score   1290     158
Multimodal Understanding         SEED               Accuracy    50.2     136
Science Question Answering       ScienceQA (SQA-I)  Accuracy    68.5     81
