TeamLoRA: Boosting Low-Rank Adaptation with Expert Collaboration and Competition

About

While Parameter-Efficient Fine-Tuning (PEFT) methods like LoRA effectively address GPU memory constraints during fine-tuning, their performance often falls short, especially in multidimensional task scenarios. One straightforward remedy is to introduce task-specific LoRA modules as domain experts, leveraging the modeling capabilities of multiple experts to enhance general multi-task learning. Though promising, these additional components often add complexity to training and inference, contravening the efficiency that PEFT is designed for. Considering this, we introduce an innovative PEFT method, TeamLoRA, consisting of a collaboration and a competition module for experts, thereby achieving the right balance of effectiveness and efficiency: (i) For collaboration, a novel knowledge-sharing and -organizing mechanism is devised to appropriately reduce the scale of matrix operations, thereby boosting training and inference speed. (ii) For competition, we propose a game-theoretic interaction mechanism that encourages experts to transfer their domain-specific knowledge when facing diverse downstream tasks, thus enhancing performance. In this way, TeamLoRA elegantly connects the experts as a "Team" with internal collaboration and competition, enabling a faster and more accurate PEFT paradigm for multi-task learning. To validate the superiority of TeamLoRA, we curate a comprehensive multi-task evaluation (CME) benchmark to thoroughly assess multi-task learning capability. Experiments conducted on CME and other benchmarks demonstrate the effectiveness and efficiency of TeamLoRA. Our project is available at https://github.com/Lin-Tianwei/TeamLoRA.
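To make the two modules concrete, below is a minimal, illustrative sketch of a multi-expert LoRA update. It assumes (as the abstract's "knowledge-sharing" suggests) that experts share one low-rank down-projection `A`, computed once per input, while each expert keeps its own up-projection `B`; a softmax router is used here as a simple stand-in for the paper's game-theoretic interaction. All names (`W0`, `A`, `Bs`, `gate`) are hypothetical and not taken from the TeamLoRA codebase.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, n_experts = 16, 16, 4, 3

W0 = rng.normal(size=(d_in, d_out))        # frozen pretrained weight
A = rng.normal(size=(d_in, r)) * 0.01      # shared low-rank down-projection (collaboration)
Bs = [np.zeros((r, d_out)) for _ in range(n_experts)]  # per-expert up-projections, zero-init
gate = rng.normal(size=(d_in, n_experts))  # router logits: stand-in for competition

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    y = x @ W0                  # frozen base path
    h = x @ A                   # shared projection, computed once for all experts
    w = softmax(x @ gate)       # (batch, n_experts): experts compete for weight
    # weighted sum of expert-specific low-rank updates
    delta = sum(w[:, [i]] * (h @ Bs[i]) for i in range(n_experts))
    return y + delta

x = rng.normal(size=(2, d_in))
out = forward(x)                # with zero-init Bs, out equals the frozen base output
```

Sharing `A` means the expensive `x @ A` product is computed once rather than once per expert, which is one plausible reading of how the collaboration module "reduces the scale of matrix operations."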

Tianwei Lin, Jiang Liu, Wenqiao Zhang, Zhaocheng Li, Yang Dai, Haoyuan Li, Zhelun Yu, Wanggui He, Juncheng Li, Hao Jiang, Siliang Tang, Yueting Zhuang • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Question Answering | VizWiz | Accuracy | 49.4 | 1525
Object Hallucination Evaluation | POPE | Accuracy | 85.3 | 1455
Multimodal Capability Evaluation | MM-Vet | Score | 31.2 | 345
Commonsense Reasoning | Common Sense Reasoning Tasks | Avg Score | 88.5 | 316
Multimodal Understanding | SEED | Accuracy | 60 | 183
Multimodal Perception and Cognition | MME | Overall Score | 1510 | 182
Text-based Visual Question Answering | TextVQA (VQA^T) | Accuracy | 57.1 | 96
Natural Language Understanding | CME benchmark | OAI Sum | 27.6 | 13
Scientific Question Answering | ScienceQA I | Accuracy | 68.7 | 8
Multimodal Reasoning | MMB-CN | Accuracy | 54 | 3

(Showing 10 of 11 rows.)
