
Beyond Next-Token Alignment: Distilling Multimodal Large Language Models via Token Interactions

About

Multimodal Large Language Models (MLLMs) demonstrate impressive cross-modal capabilities, yet their substantial size poses significant deployment challenges. Knowledge distillation (KD) is a promising solution for compressing these models, but existing methods primarily rely on static next-token alignment, neglecting the dynamic token interactions that embed essential capabilities for multimodal understanding and generation. To this end, we introduce Align-TI, a novel KD framework designed from the perspective of Token Interactions. Our approach is motivated by the insight that MLLMs rely on two primary interactions: vision-instruction token interactions to extract relevant visual information, and intra-response token interactions for coherent generation. Accordingly, Align-TI introduces two components: IVA, which enables the student model to imitate the teacher's ability to extract instruction-relevant visual information by aligning on salient visual regions; and TPA, which captures the teacher's dynamic generative logic by aligning the sequential token-to-token transition probabilities. Extensive experiments demonstrate Align-TI's superiority. Notably, our approach achieves a $2.6\%$ relative improvement over Vanilla KD, and our distilled Align-TI-2B even outperforms LLaVA-1.5-7B (a much larger MLLM) by $7.0\%$, establishing a new state-of-the-art distillation framework for training parameter-efficient MLLMs. Code is available at https://github.com/lchen1019/Align-TI.
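The TPA idea of aligning sequential token-to-token transitions, rather than each position's distribution in isolation, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes one plausible reading in which the "transition" signal is the change in the predictive next-token distribution between consecutive response positions, and the names `tpa_loss` and `softmax` are hypothetical.

```python
import numpy as np

def softmax(logits, tau=1.0):
    """Temperature-scaled softmax over the vocabulary axis."""
    z = np.exp((logits - logits.max(axis=-1, keepdims=True)) / tau)
    return z / z.sum(axis=-1, keepdims=True)

def tpa_loss(student_logits, teacher_logits, tau=2.0):
    """Hypothetical transition-alignment sketch.

    logits: (batch, seq_len, vocab_size).
    Instead of matching each position's distribution independently
    (vanilla next-token KD), match how the distribution *shifts*
    from one response position to the next.
    """
    tp = softmax(teacher_logits, tau)
    sp = softmax(student_logits, tau)
    # Step-to-step change in the predictive distribution: one way to
    # expose the teacher's sequential generative dynamics.
    t_trans = tp[:, 1:, :] - tp[:, :-1, :]
    s_trans = sp[:, 1:, :] - sp[:, :-1, :]
    return float(np.mean((s_trans - t_trans) ** 2))
```

A student whose logits match the teacher's exactly incurs zero loss, while a student that reproduces each marginal distribution only approximately is still penalized for deviating transition dynamics.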

Lin Chen, Xiaoke Zhao, Kun Ding, Weiwei Feng, Changtao Miao, Zili Wang, Wenxuan Guo, Ying Wang, Kaiyuan Zheng, Bo Zhang, Zhe Li, Shiming Xiang • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Question Answering | GQA | Accuracy 62.9 | 963 |
| Object Hallucination Evaluation | POPE | -- | 935 |
| Multimodal Evaluation | MME | Score 75.6 | 557 |
| Text-based Visual Question Answering | TextVQA | Accuracy 67.1 | 496 |
| Multimodal Understanding | MMBench | Accuracy 75.2 | 367 |
| Science Question Answering | ScienceQA (SQA) | Accuracy 76.5 | 128 |
