Understanding Pruning Regimes in Vision-Language Models Through Domain-Aware Layer Selection

About

Transformer-based vision-language models (VLMs) contain substantial depth redundancy, yet the effect of removing specific decoder layers remains poorly understood, especially for domains that require tight coupling between perception and multi-step reasoning. We study structured decoder layer pruning through the lens of domain-aware activation similarity, measuring how strongly each layer transforms representations for math versus non-math inputs. This yields simple math-aware, non-math-aware, and mixed ranking criteria that identify layers whose input-output activations change least within a target domain. Across two state-of-the-art VLMs and a broad suite of math and general multimodal benchmarks, we uncover a consistent three-regime structure: at low pruning budgets, performance is highly sensitive to which layers are removed; at moderate budgets, methods converge as structural damage accumulates; and at high budgets, structural continuity dominates, favoring spacing-aware strategies. Our domain-aware rankings achieve the strongest stability in the ranking-sensitive regime, while matching or exceeding structure-aware baselines at larger budgets. These results provide a clearer picture of how depth contributes to domain-specific behavior in VLMs and offer a practical, interpretable approach to reducing model depth without sacrificing essential mathematical or general vision-language capabilities.
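The abstract's core idea — score each decoder layer by how little it changes its input activations on a target domain, then prune the least-transforming layers first — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, the use of cosine similarity as the activation-similarity measure, and the calibration-set setup are assumptions based on the abstract.

```python
import numpy as np

def layer_change_score(h_in: np.ndarray, h_out: np.ndarray) -> float:
    """1 - mean cosine similarity between a layer's input and output
    activations (shape: tokens x hidden). A low score means the layer
    barely transforms its input, making it a pruning candidate.
    (Illustrative metric; the paper's exact similarity measure may differ.)"""
    num = (h_in * h_out).sum(axis=-1)
    den = np.linalg.norm(h_in, axis=-1) * np.linalg.norm(h_out, axis=-1) + 1e-8
    return float(1.0 - (num / den).mean())

def rank_layers_for_pruning(acts_per_layer):
    """acts_per_layer: one (h_in, h_out) pair per decoder layer, collected
    on a calibration set from the target domain (e.g. math-only inputs for
    the math-aware ranking). Returns layer indices ordered so that the
    least-transforming layers — the first to prune — come first."""
    scores = [layer_change_score(h_in, h_out) for h_in, h_out in acts_per_layer]
    return sorted(range(len(scores)), key=lambda i: scores[i])

# Toy example: layer 1 is a near-identity map, layer 0 transforms strongly.
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))
acts = [
    (x, -x),                                   # layer 0: sign flip (large change)
    (x, x + 0.01 * rng.normal(size=(16, 8))),  # layer 1: near-identity (small change)
]
print(rank_layers_for_pruning(acts))  # layer 1 ranked first for pruning
```

A mixed ranking, in this sketch, would simply average each layer's score over math and non-math calibration sets before sorting; the spacing-aware strategies favored at high budgets would additionally constrain consecutive layers from being removed together.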

Saeed Khaki, Nima Safaei, Kamal Ginotra • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Science Question Answering | ScienceQA | - | - | 502 |
| Chart Question Answering | ChartQA | - | - | 356 |
| Real-world Question Answering | RealworldQA | Overall Score | 67.45 | 58 |
| General Vision-Language Understanding | LLaVA-OneVision | Score | 66.16 | 36 |
| Mathematical Reasoning | Snapask | Accuracy | 35.83 | 36 |
| Real-world Visual Understanding | RealworldQA | Score | 72.29 | 29 |
| Fine-grained Visual Perception | VSTAR | VStar Score | 82.72 | 18 |
| Visual Search and Reasoning | VSTAR | Score | 76.96 | 18 |
| Mathematical Reasoning | NuminaMath | Math Accuracy | 55.5 | 18 |
