Mixture of Cluster-conditional LoRA Experts for Vision-language Instruction Tuning

About

Instruction tuning of Large Vision-language Models (LVLMs) has revolutionized the development of versatile models with zero-shot generalization across a wide range of downstream vision-language tasks. However, the diversity of training tasks drawn from different sources and formats leads to inevitable task conflicts, in which heterogeneous tasks compete for the same set of model parameters, resulting in sub-optimal instruction-following ability. To address this, we propose the Mixture of Cluster-conditional LoRA Experts (MoCLE), a novel Mixture of Experts (MoE) architecture that activates task-customized model parameters based on instruction clusters. A separate universal expert is further incorporated to improve the generalization of MoCLE to novel instructions. Extensive experiments on InstructBLIP and LLaVA demonstrate the effectiveness of MoCLE.

Yunhao Gou, Zhili Liu, Kai Chen, Lanqing Hong, Hang Xu, Aoxue Li, Dit-Yan Yeung, James T. Kwok, Yu Zhang • 2023
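The abstract describes two mechanisms: hard routing of each instruction to a cluster-specific LoRA expert, and a universal expert shared across all instructions. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the class names (`LoRAExpert`, `MoCLELinear`), the hard-routing interface, and the plain summation of the cluster expert and universal expert outputs are all illustrative assumptions, and cluster ids are assumed to be assigned offline (e.g., by k-means over instruction embeddings).

```python
import torch
import torch.nn as nn

class LoRAExpert(nn.Module):
    """One low-rank adapter producing delta(x) = (alpha/r) * B A x.

    B starts at zero so the adapter initially contributes nothing,
    following standard LoRA initialization.
    """
    def __init__(self, in_dim, out_dim, rank=8, alpha=16.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_dim, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return (x @ self.A.T) @ self.B.T * self.scale

class MoCLELinear(nn.Module):
    """Frozen base linear layer plus cluster-routed LoRA experts and a
    universal expert (illustrative sketch; names and routing are assumptions)."""
    def __init__(self, base_linear, num_clusters=4, rank=8):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():   # only the adapters are trained
            p.requires_grad = False
        in_dim, out_dim = base_linear.in_features, base_linear.out_features
        self.experts = nn.ModuleList(
            LoRAExpert(in_dim, out_dim, rank) for _ in range(num_clusters)
        )
        self.universal = LoRAExpert(in_dim, out_dim, rank)

    def forward(self, x, cluster_id):
        # x: (batch, seq, in_dim); cluster_id: (batch,) index of the
        # instruction cluster each example was assigned to offline.
        expert_out = torch.stack(
            [self.experts[c](x[b]) for b, c in enumerate(cluster_id.tolist())]
        )
        # Sketch combines the cluster expert and the universal expert by
        # simple addition; the paper may weight or gate them differently.
        return self.base(x) + self.universal(x) + expert_out
```

A quick usage example under the same assumptions:

```python
layer = MoCLELinear(nn.Linear(768, 768), num_clusters=4, rank=8)
x = torch.randn(2, 16, 768)        # (batch, seq, hidden)
cluster_id = torch.tensor([0, 3])  # per-example instruction cluster
y = layer(x, cluster_id)           # (2, 16, 768)
```

Because routing is conditioned on the instruction's cluster rather than learned token-by-token, each expert specializes to a group of similar tasks, while the always-active universal expert preserves behavior on instructions that fall outside the training clusters.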

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Commonsense Reasoning | MMT-47 Commonsense Reasoning | Accuracy | 84.18 | 17 |
| Image Classification | MMT-47 | Accuracy | 94.25 | 17 |
| Vision Understanding | MMT-47 Vision Benchmark | Accuracy | 77.59 | 17 |
| Action Understanding | MMT-47 Action Understanding | Accuracy | 51.53 | 17 |
| Object Motion and Spatial Reasoning | MMT-47 Object Motion & Spatial | Accuracy | 63.43 | 17 |
| High-Level Reasoning | MMT-47 High Level Reasoning | Accuracy | 43.78 | 17 |
| Natural Language Understanding | GLUE | Accuracy | 90.66 | 17 |
