
Fast-Slow Efficient Training for Multimodal Large Language Models via Visual Token Pruning

About

Multimodal Large Language Models (MLLMs) suffer from a severe training inefficiency problem, stemming from their massive model sizes and large visual token counts. Existing efforts in efficient training focus on reducing model sizes or trainable parameters. Inspired by the success of Visual Token Pruning (VTP) in improving inference efficiency, we explore a complementary research direction for efficient training: reducing visual tokens. However, applying VTP at the training stage introduces a training-inference mismatch: pruning-trained models perform poorly when inferring on non-pruned, full visual token sequences. To close this gap, we propose DualSpeed, a fast-slow framework for efficient training of MLLMs. The fast mode is the primary mode; it incorporates existing VTP methods as plugins to reduce visual tokens, along with a mode isolator that separates the model's behaviors across the two modes. The slow mode is the auxiliary mode, in which the model is trained on full visual sequences to retain training-inference consistency. To boost slow-mode training, DualSpeed further leverages self-distillation to learn from the sufficiently trained fast mode. Together, DualSpeed achieves both training efficiency and non-degraded performance. Experiments show that DualSpeed accelerates the training of LLaVA-1.5 by 2.1$\times$ and LLaVA-NeXT by 4.0$\times$ while retaining over 99% of performance. Code: https://github.com/dingkun-zhang/DualSpeed
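The fast-slow scheme described above can be sketched in a few lines. This is a toy illustration only, not the paper's implementation: the function names `prune_visual_tokens` and `training_schedule`, the score-based pruning rule, and the 8:1 fast/slow step ratio are all assumptions made for the sketch.

```python
def prune_visual_tokens(tokens, keep_ratio=0.25):
    """VTP plugin stand-in: keep only the highest-importance visual tokens.

    Each token is a dict with a precomputed "score"; real VTP methods
    typically derive this from attention, but here it is given directly.
    """
    k = max(1, int(len(tokens) * keep_ratio))
    return sorted(tokens, key=lambda t: t["score"], reverse=True)[:k]


def training_schedule(num_steps, slow_every=8):
    """DualSpeed-style schedule: mostly fast-mode steps on pruned tokens,
    with a periodic slow-mode step on the full visual sequence to preserve
    training-inference consistency (where self-distillation from the
    fast mode would be applied). The 8:1 ratio is an assumed example.
    """
    return [
        "slow" if step % slow_every == slow_every - 1 else "fast"
        for step in range(num_steps)
    ]


if __name__ == "__main__":
    tokens = [{"score": s} for s in (0.9, 0.1, 0.4, 0.7)]
    print(prune_visual_tokens(tokens))        # fast mode sees 1/4 of tokens
    print(training_schedule(16))              # 14 fast steps, 2 slow steps
```

In the real framework, each "fast" step would also pass a mode flag (the mode isolator) so the model can separate its pruned-input behavior from its full-input behavior.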

Dingkun Zhang, Shuhan Qi, Yulin Wu, Xinyu Xiao, Xuan Wang, Long Chen • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | TextVQA | Accuracy | 57.42 | 1117 |
| Object Hallucination Evaluation | POPE | Accuracy | 86.57 | 935 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 78.2 | 664 |
| Multimodal Evaluation | MME | Score | 1.49e+3 | 557 |
| Visual Question Answering | GQA | Accuracy | 62.04 | 374 |
| Science Question Answering | ScienceQA (SQA) | Accuracy | 69.66 | 128 |
| Multimodal Benchmark | MMBench (MMB) | Accuracy | 65.98 | 70 |
| Multimodal Evaluation | MMBench CN | Accuracy | 57.13 | 57 |
| Multimodal Benchmark | SEED | Accuracy | 59.72 | 7 |
