
Collaborative Multi-Mode Pruning for Vision-Language Models

About

Vision-Language Models (VLMs) have advanced rapidly within the unified Transformer architecture, yet their deployment on resource-constrained devices remains challenging due to high computational complexity. While pruning has emerged as an effective technique for compressing VLMs, existing approaches predominantly focus on a single mode, pruning either parameters or tokens, and thus fail to fully exploit the inherent redundancy in each mode, which leads to substantial performance degradation at high pruning ratios. To address these limitations, we propose Collaborative Multi-Mode Pruning (CoMP), a novel framework tailored for VLMs that performs joint parameter and token pruning. Specifically, we first design a Collaborative Importance Metric (CIM) that accounts for the mutual interference between the coupled parameters and tokens. It incorporates the distinct significance of tokens into the computation of parameter importance scores, while simultaneously mitigating the effect of pruned parameters on token importance scores. Moreover, we develop a Multi-Mode Pruning Strategy (MPS) that decomposes the overall pruning process into a sequence of stages; at each stage, it estimates the priority of the different pruning modes based on their pruning cost and adaptively shifts to the optimal one. Additionally, MPS integrates historical cost and random exploration to stabilize the pruning process and avoid local optima. Extensive experiments across various vision-language tasks and models demonstrate that our method substantially improves performance at high pruning ratios compared to state-of-the-art approaches. The source code is available at https://github.com/Wuzimeng/CoMP.git.
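The stage-wise mode selection in MPS can be sketched in a few lines. This is a minimal illustration based only on the abstract, not the paper's actual implementation: the function name, the cost dictionaries, and the `epsilon` (exploration rate) and `momentum` (historical smoothing) hyperparameters are all assumptions introduced here for clarity.

```python
import random

def choose_pruning_mode(stage_costs, history, epsilon=0.1, momentum=0.9):
    """Pick a pruning mode ('param' or 'token') for one pruning stage.

    stage_costs: dict mapping mode -> estimated pruning cost at this stage
                 (e.g. the loss increase caused by pruning in that mode).
    history:     dict mapping mode -> smoothed historical cost; updated
                 in place. A lower smoothed cost means a higher priority.
    epsilon/momentum are illustrative hyperparameters, not from the paper.
    """
    # Blend the current estimate with the historical cost for stability.
    for mode, cost in stage_costs.items():
        prev = history.get(mode, cost)
        history[mode] = momentum * prev + (1.0 - momentum) * cost

    # Occasionally explore a random mode to avoid local optima.
    if random.random() < epsilon:
        return random.choice(list(stage_costs))

    # Otherwise shift to the mode with the lowest smoothed cost.
    return min(history, key=history.get)
```

With `epsilon=0.0` the choice is deterministic: given costs `{"param": 0.8, "token": 0.3}` and an empty history, the function returns `"token"`, since token pruning is currently the cheaper mode.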

Zimeng Wu, Yunhong Wang, Donghao Wang, Jiaxin Chen • 2026

Related benchmarks

| Task                            | Dataset             | Metric           | Result  | Rank |
|---------------------------------|---------------------|------------------|---------|------|
| Object Hallucination Evaluation | POPE                | --               | --      | 1455 |
| Visual Question Answering       | VQA v2 (test-dev)   | Overall Accuracy | 76.5    | 706  |
| Image-to-Text Retrieval         | Flickr30K 1K (test) | R@1              | 94.4    | 491  |
| Text-to-Image Retrieval         | Flickr30K 1K (test) | R@1              | 80.1    | 432  |
| Multimodal Understanding        | MMBench (MMB)       | --               | --      | 141  |
| Visual Question Answering       | GQA                 | GQA Score        | 61.9    | 85   |
| Visual Question Answering       | VQAv2 (test-dev)    | Accuracy         | 76.5    | 80   |
| Multimodal Evaluation           | MME                 | MME Score        | 1.84e+3 | 73   |
| Visual Question Answering       | TextVQA             | TextVQA Accuracy | 57.1    | 67   |
| Image-to-Text Retrieval         | COCO 5K (test)      | R@1              | 76.2    | 47   |
Showing 10 of 22 rows.
