
CLASP: Class-Adaptive Layer Fusion and Dual-Stage Pruning for Multimodal Large Language Models

About

Multimodal Large Language Models (MLLMs) suffer from substantial computational overhead due to the high redundancy in visual token sequences. Existing approaches typically address this issue using single-layer Vision Transformer (ViT) features and static pruning strategies. However, such fixed configurations are often brittle under diverse instructions. To overcome these limitations, we propose CLASP, a plug-and-play token reduction framework based on class-adaptive layer fusion and dual-stage pruning. Specifically, CLASP first constructs category-specific visual representations through multi-layer vision feature fusion. It then performs dual-stage pruning, allocating the token budget between attention-salient pivot tokens for relevance and redundancy-aware completion tokens for coverage. Through class-adaptive pruning, CLASP enables prompt-conditioned feature fusion and budget allocation, allowing aggressive yet robust visual token reduction. Extensive experiments demonstrate that CLASP consistently outperforms existing methods across a wide range of benchmarks, pruning ratios, and MLLM architectures. Code will be available at https://github.com/Yunkaidang/CLASP.
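The dual-stage pruning idea in the abstract — keep attention-salient "pivot" tokens for relevance, then spend the remaining budget on low-redundancy "completion" tokens for coverage — can be sketched as follows. This is an illustrative reading of the abstract only: the function name, the cosine-similarity redundancy measure, and the fixed `pivot_ratio` split are assumptions, not the paper's actual scoring or budget-allocation rules.

```python
import numpy as np

def dual_stage_prune(tokens, attn_scores, budget, pivot_ratio=0.5):
    """Illustrative dual-stage visual token pruning (not CLASP's exact method).

    tokens:      (N, D) visual token features.
    attn_scores: (N,) text-to-vision attention saliency per token.
    budget:      total number of tokens to keep.
    pivot_ratio: assumed fixed split between the two stages.
    """
    n = tokens.shape[0]
    n_pivot = max(1, min(int(budget * pivot_ratio), n))

    # Stage 1: pivot tokens = most attention-salient tokens.
    order = np.argsort(-attn_scores)
    kept = list(order[:n_pivot])
    remaining = list(order[n_pivot:])

    # Stage 2: greedily add completion tokens that are least
    # redundant (lowest max cosine similarity) w.r.t. kept tokens.
    normed = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
    while len(kept) < min(budget, n) and remaining:
        sims = normed[remaining] @ normed[kept].T   # (|remaining|, |kept|)
        redundancy = sims.max(axis=1)               # worst-case overlap
        pick = int(np.argmin(redundancy))
        kept.append(remaining.pop(pick))
    return sorted(kept)
```

Under this reading, the pivot stage ties the kept set to the instruction (via attention), while the completion stage restores spatial/semantic coverage that pure attention ranking tends to drop at aggressive pruning ratios.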

Yunkai Dang, Yizhu Jiang, Yifan Jiang, Qi Fan, Yinghuan Shi, Wenbin Li, Yang Gao • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Visual Question Answering | VizWiz | Accuracy 52.1 | 1525 |
| Object Hallucination Evaluation | POPE | Accuracy 85.8 | 1455 |
| Visual Question Answering | GQA | Accuracy 63.1 | 1249 |
| Text-based Visual Question Answering | TextVQA | Accuracy 61.7 | 807 |
| Multimodal Understanding | MMBench | Accuracy 61.3 | 637 |
| Multimodal Understanding | MM-Vet | MM-Vet Score 33.3 | 531 |
| Science Question Answering | ScienceQA | Accuracy 73.3 | 502 |
| Multimodal Understanding | SEED-Bench | Accuracy 65.4 | 343 |
| Science Question Answering | ScienceQA (SQA) | Accuracy 69.6 | 273 |
| Science Question Answering | ScienceQA (test) | Average Accuracy 73.5 | 245 |

Showing 10 of 34 rows.
