
HiDe-LLaVA: Hierarchical Decoupling for Continual Instruction Tuning of Multimodal Large Language Model

About

Instruction tuning is widely used to improve a pre-trained Multimodal Large Language Model (MLLM) by training it on curated task-specific datasets, enabling better comprehension of human instructions. However, it is infeasible to collect all possible instruction datasets simultaneously in real-world scenarios. Equipping MLLMs with continual instruction tuning is therefore essential for maintaining their adaptability. Yet existing methods often trade memory efficiency for performance gains, significantly compromising overall efficiency. In this paper, we propose a task-specific expansion and task-general fusion framework based on the variations in Centered Kernel Alignment (CKA) similarity across different model layers when trained on diverse datasets. Furthermore, we analyze the information leakage present in the existing benchmark and propose a new, more challenging benchmark to rationally evaluate the performance of different methods. Comprehensive experiments show a significant performance improvement of our method over existing state-of-the-art methods. Code and dataset are released at https://github.com/Ghy0501/HiDe-LLaVA.
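The framework above hinges on comparing CKA similarity between layer representations trained on different datasets. As a point of reference, linear CKA between two activation matrices can be sketched as follows (this is a generic implementation of the published CKA formula, not code from the HiDe-LLaVA repository; the function name and shapes are illustrative):

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two activation
    matrices X (n, d1) and Y (n, d2) computed on the same n inputs,
    e.g. the outputs of one layer under two different fine-tunings."""
    # Center each feature over the sample dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return float(cross / (norm_x * norm_y))

# Identical representations yield CKA = 1; unrelated ones score near 0.
rng = np.random.default_rng(0)
acts = rng.normal(size=(128, 64))
print(round(linear_cka(acts, acts), 4))  # 1.0
```

Layers whose CKA similarity stays high across datasets are candidates for task-general fusion, while layers whose similarity drops are candidates for task-specific expansion.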

Haiyang Guo, Fanhu Zeng, Ziwei Xiang, Fei Zhu, Da-Han Wang, Xu-Yao Zhang, Cheng-Lin Liu• 2025

Related benchmarks

Task | Dataset | Metric | Score | Rank
Continual Visual Question Answering | VQA v2 (test) | Rec. Accuracy | 49.27 | 23
Continual Instruction Tuning | UCIT | ImageNet-R Score | 89.33 | 20
Continual Instruction Tuning | MLLM-DCL | RS Score | 77.73 | 20
Continual Learning | MLLM-CL Ability | OCR Score | 24.6 | 17
Domain-incremental learning | MLLM-CL Domain | RS Score | 74.8 | 17
Continual Learning | MLLM-CL (test) | RS Score | 74.3 | 13
Multimodal Continual Instruction Tuning | MLLM-CTBENCH | Math QA Accuracy | 26.85 | 12
Multimodal Continual Learning | MLLM-CTBENCH | Math QA Accuracy | 42.12 | 12
Multimodal Instruction Following | COIN | SciQA Score | 73.2 | 9
Continual Instruction Tuning | UCIT 1.0 (test) | ImageNet-R Score | 84.03 | 6
