CAPA: Contribution-Aware Pruning and FFN Approximation for Efficient Large Vision-Language Models

About

Efficient inference in Large Vision-Language Models is constrained by the high cost of processing thousands of visual tokens, yet it remains unclear which tokens and computations can be safely removed. While attention scores are commonly used to estimate visual token importance, they are an imperfect proxy for actual contribution. We show that Attention Contribution, which weights attention probabilities by value-vector magnitude, provides a more accurate criterion for visual token selection. Our empirical analysis reveals that visual attention sinks are functionally heterogeneous, comprising Probability Dumps, which have low contribution and can be safely pruned, and Structural Anchors, which have high contribution and are essential for maintaining model performance. Further, we identify substantial redundancy in the Feed-Forward Networks (FFNs) associated with visual tokens, particularly in intermediate layers where image tokens exhibit linear behavior. Based on these findings, we introduce CAPA (Contribution-Aware Pruning and FFN Approximation), a dual-strategy framework that prunes visual tokens using attention contribution at critical functional transitions and reduces FFN computation through efficient linear approximations. Experiments across a range of benchmarks and baselines show that CAPA achieves competitive efficiency-performance trade-offs with improved robustness.
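
The listing includes no code, but the scoring rule described above (attention probabilities weighted by value-vector magnitude) is easy to sketch. The PyTorch snippet below is a minimal illustration under our own assumptions: the function names, tensor shapes, the aggregation over heads and queries, and the keep-ratio pruning policy are not taken from the paper.

```python
import torch

def attention_contribution(attn_probs: torch.Tensor,
                           values: torch.Tensor) -> torch.Tensor:
    """Score each key token by attention probability weighted by the
    magnitude of its value vector, aggregated over heads and queries.

    attn_probs: (num_heads, q_len, kv_len) softmax attention weights
    values:     (num_heads, kv_len, head_dim) value vectors
    returns:    (kv_len,) contribution score per key token
    """
    v_norm = values.norm(dim=-1)                 # (num_heads, kv_len)
    contrib = attn_probs * v_norm.unsqueeze(1)   # weight each prob by ||v_j||
    return contrib.sum(dim=(0, 1))


def prune_visual_tokens(hidden: torch.Tensor,
                        scores: torch.Tensor,
                        vis_start: int,
                        vis_end: int,
                        keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the top-k visual tokens by contribution score;
    non-visual tokens are left untouched."""
    vis_scores = scores[vis_start:vis_end]
    k = max(1, int(keep_ratio * vis_scores.numel()))
    keep = vis_scores.topk(k).indices.sort().values + vis_start
    idx = torch.cat([
        torch.arange(vis_start, device=hidden.device),
        keep,
        torch.arange(vis_end, hidden.size(0), device=hidden.device),
    ])
    return hidden[idx]
```

Under this scoring, a token that draws high attention but carries a near-zero value vector (a Probability Dump in the paper's terminology) scores low and is pruned, while a high-attention token with a large value vector (a Structural Anchor) is retained.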
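
The FFN reduction can likewise be pictured as swapping the nonlinear FFN for a fitted linear map on visual tokens in intermediate layers, motivated by the linear behavior noted above. The class below is a hypothetical sketch; the least-squares calibration is our assumption, not the paper's procedure.

```python
import torch
import torch.nn as nn

class LinearFFNApprox(nn.Module):
    """Cheap linear stand-in for a transformer FFN on visual tokens.

    Hypothetical calibration: fit W, b by least squares so that
    x @ W.T + b approximates FFN(x) on recorded visual-token activations.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    @torch.no_grad()
    def fit(self, x: torch.Tensor, y: torch.Tensor) -> None:
        # x: (n, dim) FFN inputs, y: (n, dim) FFN outputs from calibration runs.
        ones = torch.ones(x.size(0), 1, dtype=x.dtype, device=x.device)
        A = torch.cat([x, ones], dim=1)          # append a bias column
        sol = torch.linalg.lstsq(A, y).solution  # (dim + 1, dim)
        self.proj.weight.copy_(sol[:-1].T)
        self.proj.bias.copy_(sol[-1])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)
```

At inference, visual-token hidden states in the chosen intermediate layers would route through this single d×d projection while text tokens keep the full FFN; with the usual hidden size d_ffn = 4d, that replaces roughly 8d² multiply-accumulates per token with d², about an 8x reduction for those tokens.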

Samyak Jha, Junho Kim • 2026

Related benchmarks

Task                           | Dataset    | Metric       | Result | Rank
Visual Question Answering     | VQA v2     | Accuracy     | 84.3   | 1165
Multimodal Reasoning          | MM-Vet     | MM-Vet Score | 69.85  | 281
Multi-discipline Reasoning    | MMMU       | Accuracy     | 53.67  | 16
OCR Visual Question Answering | TextVQA    | Accuracy     | 82.3   | 10
Holistic Perception           | MMBench    | Accuracy     | 84.79  | 6
Holistic Perception           | SEED-Bench | Accuracy     | 75.3   | 6
