Fine-grained Token Allocation Via Operation Pruning for Efficient MLLMs
About
Token reduction accelerates Multimodal Large Language Models (MLLMs) by discarding excessive tokens, but it overlooks differences in structural redundancy: critical and redundant modules process identical token loads. For fine-grained control over computation, we define an "operation" as the computation a module performs to process a group of tokens, and introduce the operation pruning framework, which lets each module selectively process tokens. Built on this framework, we propose Depth-wise Operation Pruning (DOP), a data-driven method that searches for strategies to prune redundant operations and reallocate the saved computational budget to critical modules, allowing them to process more tokens than uniform allocation would. The search minimizes divergence from the original model's output probability distribution on a small validation set while satisfying computational constraints. For efficient optimization, DOP applies depth-wise pruning to shrink the policy space and uses an additive approximation to minimize the number of validation runs. Depth-wise pruning partitions operations by module type and token group, and within each module-group pair it prunes operations in deeper layers before those in shallower layers. The additive approximation measures individual divergences by varying each policy parameter independently, then sums them to approximate the joint divergence of changing all policy parameters simultaneously, reducing the required validation runs from exponential to linear in the number of policy parameters. Comprehensive evaluations show that DOP establishes new state-of-the-art performance across 6 MLLMs and 13 benchmarks against 12 baselines. On LLaVA-Next-7B, DOP achieves an 86% TFLOPs reduction and an 83% latency reduction on real GPUs with only a 1% performance loss. Our extensive ablation studies further demonstrate DOP's data and time efficiency as well as its strong generalization.
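The additive approximation above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `measure_divergence` callable, the dict-based policy representation, and the toy values are all assumptions made for the example. Each policy parameter that differs from the baseline costs exactly one validation run, so the total cost is linear in the number of parameters rather than exponential in their joint combinations.

```python
def additive_divergence(policy, baseline, measure_divergence):
    """Approximate the joint divergence of `policy` from `baseline`
    by summing per-parameter divergences.

    policy, baseline: dicts mapping policy-parameter name -> value
        (e.g. how many operations to prune for a module-group pair).
    measure_divergence: hypothetical callable that evaluates a full
        policy on the validation set and returns its divergence from
        the original model (one validation run per call).
    """
    total = 0.0
    for name, value in policy.items():
        if value == baseline[name]:
            continue  # unchanged parameter contributes no divergence
        # Vary only this one parameter from the baseline: one run.
        single = dict(baseline)
        single[name] = value
        total += measure_divergence(single)
    return total


# Toy usage: a divergence that happens to be exactly additive,
# so the approximation recovers the true joint value (5.0 here).
baseline = {"attn_group1": 0, "mlp_group1": 0, "mlp_group2": 0}
policy = {"attn_group1": 1, "mlp_group1": 2, "mlp_group2": 0}
toy_div = lambda p: float(sum(v * v for v in p.values()))
print(additive_divergence(policy, baseline, toy_div))
```

For the two changed parameters above, the approximation needs two validation runs; evaluating every joint combination of parameter settings would instead grow exponentially with the number of parameters.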
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 | Accuracy | 74.7 | 1165 |
| Object Hallucination Evaluation | POPE | Accuracy | 87.9 | 935 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 54.5 | 496 |
| Visual Question Answering | GQA | Accuracy | 58.1 | 374 |
| Multimodal Understanding | MMBench | -- | -- | 367 |
| Multimodal Understanding | MMBench CN | Accuracy | 53.7 | 162 |
| Science Question Answering | ScienceQA SQA-IMG | Accuracy | 69.3 | 114 |
| Multimodal Understanding | MMBench (MMB) | Accuracy | 60.1 | 69 |
| Multimodal Perception | MME Perception | Perception Score | 1400 | 61 |
| Multimodal Understanding | SEED-I Image | Accuracy | 0.822 | 40 |