
Continual Instruction Tuning for Large Multimodal Models

About

Instruction tuning is now a widely adopted approach for aligning large multimodal models (LMMs) with human intent. It unifies the data format of vision-language tasks, enabling multi-task joint training. However, new vision-language tasks are constantly being created in practice. Instead of re-training LMMs from scratch whenever new tasks arrive, continual learning offers the flexibility to continually and efficiently exploit the evolving data. This work explores two questions: 1) Do LMMs still suffer from catastrophic forgetting in continual instruction tuning? 2) Do the three existing classes of continual learning methods remain applicable to the continual instruction tuning of LMMs? We conduct an extensive study to address these questions. First, we establish the first benchmark for this setting and find that catastrophic forgetting is still observed when LMMs are continually instruction-tuned; however, multi-task joint instruction tuning improves the model's continual learning ability and mitigates forgetting. Second, we integrate and adapt classic continual learning methods to our context, demonstrating the efficacy of data replay and model expansion strategies across diverse scenarios. In contrast, regularization-based methods perform well only on models that have been jointly instruction-tuned on multiple tasks. Third, we analyze the correlation and forgetting dynamics between vision-language task pairs and propose task-similarity-informed regularization and model expansion methods for continual instruction tuning of LMMs. Experimental results show that our approach consistently improves the model's performance.
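The task-similarity-informed regularization mentioned above lends itself to a short illustration. The sketch below is a minimal, hypothetical PyTorch version, assuming an EWC-style quadratic penalty whose per-task weight is modulated by a similarity score between the new task and each previously learned one. The class name, the (1 - similarity) weighting heuristic, and the Fisher-estimate interface are all illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class TaskSimilarityEWC:
    """EWC-style drift penalty scaled by task similarity (illustrative sketch)."""

    def __init__(self, model: nn.Module, lambda_reg: float = 1.0):
        self.model = model
        self.lambda_reg = lambda_reg
        # One anchor per completed task: (similarity to the current task,
        # parameter snapshot, diagonal Fisher estimate keyed by param name).
        self.anchors = []

    def register_task(self, fisher_diag: dict, similarity: float) -> None:
        # Snapshot the parameters after finishing a task. `fisher_diag`
        # maps parameter names to squared-gradient estimates computed on
        # that task's data (estimation is outside this sketch).
        snapshot = {n: p.detach().clone() for n, p in self.model.named_parameters()}
        self.anchors.append((similarity, snapshot, fisher_diag))

    def penalty(self) -> torch.Tensor:
        device = next(self.model.parameters()).device
        loss = torch.zeros((), device=device)
        for similarity, snapshot, fisher in self.anchors:
            # Assumed heuristic: old tasks that are dissimilar to the current
            # one are more prone to forgetting, so they get a stronger penalty.
            weight = self.lambda_reg * (1.0 - similarity)
            for n, p in self.model.named_parameters():
                loss = loss + weight * (fisher[n] * (p - snapshot[n]) ** 2).sum()
        return loss
```

During training on a new task, the penalty would simply be added to the task loss, e.g. `loss = task_loss + ewc.penalty()`; the exact similarity measure and weighting scheme in the paper may differ from this sketch.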

Jinghan He, Haiyun Guo, Ming Tang, Jinqiao Wang • 2023

Related benchmarks

| Task                 | Dataset                                               | Result                                   | Rank |
|----------------------|-------------------------------------------------------|------------------------------------------|------|
| Knowledge Unlearning | 16-task Sequential Unlearning, Forgotten Data (Avg)   | Context-aware Refusal Rate (CRR): 53.94  | 18   |
| Knowledge Unlearning | 16-task Sequential Unlearning, Forgotten Data (Last)  | Context-aware Refusal Rate (CRR): 59.87  | 16   |
| Knowledge Retention  | 16-task Sequential Unlearning, Retained Data (Avg)    | Specificity: 77.65                       | 9    |
| Knowledge Retention  | 16-task Sequential Unlearning, Retained Data (Last)   | Specificity: 75.08                       | 8    |
