
Fine-grained Token Allocation Via Operation Pruning for Efficient MLLMs

About

Token reduction accelerates Multimodal Large Language Models (MLLMs) by removing excessive tokens, but it overlooks differences in structural redundancy: critical and redundant modules process identical token loads. For fine-grained computation control, we define an "operation" as the computation performed by a module to process a group of tokens and introduce the operation pruning framework, which enables modules to selectively process tokens. Built on this framework, we propose Depth-wise Operation Pruning (DOP), a data-driven method that searches for strategies to prune redundant operations and reallocate the saved computational budget so that critical modules process more tokens than under uniform allocation. The search minimizes divergence from the original model's output probability distribution on a small validation set while satisfying computational constraints. For efficient optimization, DOP applies depth-wise pruning to reduce the policy space and uses an additive approximation to minimize the number of validation runs required. Depth-wise pruning partitions operations by module type and token group, and within each module-group pair prunes operations in deeper layers before those in shallower layers. The additive approximation obtains individual divergences by independently varying each policy parameter, then sums them to approximate the joint divergence of changing all policy parameters simultaneously, reducing the required validation runs from exponential to linear in the number of policy parameters. Comprehensive evaluations show that DOP establishes new state-of-the-art performance across 6 MLLMs and 13 benchmarks against 12 baselines. On LLaVA-Next-7B, DOP achieves an 86% TFLOPS reduction and an 83% latency reduction on a real GPU with only 1% performance loss. Our extensive ablation studies further demonstrate DOP's data and time efficiency as well as its strong generalization capabilities.
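The additive approximation above can be sketched in a few lines. This is a minimal illustrative example, not the paper's implementation: the function names, the toy policy representation, and the stand-in divergence function are all assumptions. The key point it shows is that estimating each parameter's divergence by varying it alone from a baseline policy takes only one validation run per parameter, so the total cost is linear in the number of parameters rather than exponential in the number of joint policies.

```python
def additive_divergence(policy, baseline, divergence_fn):
    """Approximate the joint divergence of `policy` by summing
    per-parameter divergences.

    Each term varies a single policy parameter away from `baseline`
    while holding the others fixed, so only len(policy) validation
    runs are needed (linear, not exponential, in parameter count).
    All names here are illustrative; `divergence_fn` stands in for a
    validation-set divergence estimate against the original model.
    """
    total = 0.0
    for i, value in enumerate(policy):
        single = list(baseline)
        single[i] = value  # vary one parameter at a time
        total += divergence_fn(tuple(single))
    return total


if __name__ == "__main__":
    # Toy separable "divergence" for which the approximation is exact.
    baseline = (0, 0, 0)
    toy_divergence = lambda p: sum(x * x for x in p)
    print(additive_divergence((1, 2, 3), baseline, toy_divergence))  # 1 + 4 + 9 = 14
```

In the actual method, `divergence_fn` would run the pruned model on the small validation set and compare its output distribution to the original model's; the toy quadratic here is only a placeholder to make the sketch runnable.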

Aoming Liu, Reuben Tan, Boqing Gong, Bryan A. Plummer • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
---- | ------- | ------ | ------ | ----
Visual Question Answering | VQA v2 | Accuracy | 74.7 | 1165
Object Hallucination Evaluation | POPE | Accuracy | 87.9 | 935
Text-based Visual Question Answering | TextVQA | Accuracy | 54.5 | 496
Visual Question Answering | GQA | Accuracy | 58.1 | 374
Multimodal Understanding | MMBench | -- | -- | 367
Multimodal Understanding | MMBench CN | Accuracy | 53.7 | 162
Science Question Answering | ScienceQA (SQA-IMG) | Accuracy | 69.3 | 114
Multimodal Understanding | MMBench (MMB) | Accuracy | 60.1 | 69
Multimodal Perception | MME Perception | Perception Score | 1400 | 61
Multimodal Understanding | SEED-I (Image) | Accuracy | 0.822 | 40

(10 of 11 rows shown)
