
TAMP: Token-Adaptive Layerwise Pruning in Multimodal Large Language Models

About

Multimodal Large Language Models (MLLMs) have shown remarkable versatility in understanding diverse multimodal data and tasks. However, these capabilities come at the cost of increased model scale. While post-training pruning effectively reduces model size in unimodal models, its application to MLLMs often yields limited success. Our analysis reveals that conventional methods fail to account for the unique token attributes across layers and modalities inherent to MLLMs. Motivated by this observation, we propose TAMP, a simple yet effective pruning framework tailored for MLLMs, featuring two key components: (1) Diversity-Aware Sparsity, which adjusts the sparsity ratio per layer based on the diversity among multimodal output tokens, preserving more parameters in high-diversity layers; and (2) Adaptive Multimodal Input Activation, which identifies representative multimodal input tokens using attention scores to guide unstructured weight pruning. We validate our method on two state-of-the-art MLLMs: LLaVA-NeXT, designed for vision-language tasks, and VideoLLaMA2, capable of processing audio, visual, and language modalities. Experiments across various multimodal evaluation benchmarks demonstrate that each component of our approach substantially outperforms existing pruning techniques.
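The abstract only names the two components, not their formulas, so the sketch below is purely illustrative: the cosine-distance diversity measure, the z-score-based sparsity allocation, and the attention-weighted Wanda-style pruning score are all assumptions standing in for the paper's unspecified details, not TAMP's actual method.

```python
# Illustrative sketch of the two ideas from the abstract. All concrete
# formulas here are assumptions, not the paper's method.
import numpy as np

def layer_diversity(tokens):
    """Assumed diversity proxy: mean pairwise cosine distance among a
    layer's output tokens (rows of `tokens`)."""
    X = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
    sim = X @ X.T
    n = len(tokens)
    # Average off-diagonal similarity, turned into a distance.
    return 1.0 - (sim.sum() - n) / (n * (n - 1))

def allocate_sparsity(diversities, target=0.5, strength=0.5):
    """Assumed schedule: prune high-diversity layers less, keeping the
    average sparsity near `target` (before clipping)."""
    d = np.asarray(diversities)
    offset = strength * (d - d.mean()) / (d.std() + 1e-8)  # z-scored deviation
    return np.clip(target - offset * target, 0.0, 1.0)

def prune_mask(W, X, attn, sparsity):
    """Assumed pruning score in the Wanda style, |W| * ||x||, but with
    input tokens (rows of X) weighted by their attention scores."""
    w = attn / attn.sum()
    act = np.sqrt((w[:, None] * X**2).sum(axis=0))  # weighted L2 per input dim
    score = np.abs(W) * act[None, :]
    k = int(sparsity * W.size)  # number of weights to remove
    thresh = np.partition(score.ravel(), k)[k] if k > 0 else -np.inf
    return score >= thresh  # True = keep the weight
```

Under this sketch, a layer whose output tokens are nearly identical gets a higher sparsity ratio than one with diverse tokens, matching the abstract's "preserving more parameters in high-diversity layers."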

Jaewoo Lee, Keyang Xuan, Chanakya Ekbote, Sandeep Polisetty, Yi R. Fung, Paul Pu Liang • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Question Answering | VizWiz | Accuracy 63.65 | 1525 |
| Object Hallucination Evaluation | POPE | Accuracy 88.2 | 1455 |
| Visual Question Answering | TextVQA | -- | 1285 |
| Text-based Visual Question Answering | TextVQA | Accuracy 60.08 | 807 |
| Multimodal Evaluation | MME | -- | 658 |
| Multimodal Understanding | MMBench | -- | 637 |
| Video Understanding | MVBench | Accuracy 42.6 | 425 |
| Science Question Answering | ScienceQA IMG | Accuracy 78.58 | 294 |
| Video Understanding | VideoMME | Overall Score 49.3 | 222 |
| Video Understanding | EgoSchema | EgoSchema Score 53.9 | 158 |

Showing 10 of 35 rows
