
LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition

About

Low-rank adaptations (LoRA) are often employed to fine-tune large language models (LLMs) for new tasks. This paper investigates LoRA composability for cross-task generalization and introduces LoraHub, a simple framework devised for the purposive assembly of LoRA modules trained on diverse given tasks, with the objective of achieving adaptable performance on unseen tasks. With just a few examples from a new task, LoraHub can fluidly combine multiple LoRA modules, eliminating the need for human expertise and assumptions. Notably, the composition requires neither additional model parameters nor gradients. Empirical results on the Big-Bench Hard benchmark suggest that LoraHub, while not surpassing the performance of in-context learning, offers a notable performance-efficiency trade-off in few-shot scenarios by employing a significantly reduced number of tokens per example during inference. Notably, LoraHub establishes a better upper bound compared to in-context learning when paired with different demonstration examples, demonstrating its potential for future development. Our vision is to establish a platform for LoRA modules, empowering users to share their trained LoRA modules. This collaborative approach facilitates the seamless application of LoRA modules to novel tasks, contributing to an adaptive ecosystem. Our code is available at https://github.com/sail-sg/lorahub, and all the pre-trained LoRA modules are released at https://huggingface.co/lorahub.
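The core idea of the abstract — merging several task-specific LoRA modules into one low-rank update, with composition weights tuned on a few examples and no gradients — can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `compose_lora`, `few_shot_loss`, and `search_weights` are hypothetical names, the "model" is a plain linear map standing in for an LLM layer, and naive random search stands in for the gradient-free optimizer the framework would use in practice.

```python
import numpy as np

def compose_lora(modules, weights):
    """Merge LoRA modules into one low-rank update:
    delta_W = sum_i w_i * (B_i @ A_i).
    Adds no new parameters and needs no gradients."""
    return sum(w * (B @ A) for w, (A, B) in zip(weights, modules))

def few_shot_loss(delta_w, x, y, w0):
    """Stand-in for the LLM's loss on the few-shot examples:
    squared error of the adapted linear map W0 + delta_W."""
    pred = x @ (w0 + delta_w).T
    return float(np.mean((pred - y) ** 2))

def search_weights(modules, x, y, w0, n_trials=200, seed=0):
    """Gradient-free search over composition weights. A naive random
    search here; a real system would use a proper black-box optimizer."""
    rng = np.random.default_rng(seed)
    best_w, best_loss = None, np.inf
    for _ in range(n_trials):
        w = rng.uniform(-1.5, 1.5, size=len(modules))
        loss = few_shot_loss(compose_lora(modules, w), x, y, w0)
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w, best_loss

# Toy setup: two rank-2 LoRA modules (A: r x d_in, B: d_out x r)
# adapting a 4x3 base weight matrix on 8 few-shot examples.
rng = np.random.default_rng(1)
mods = [(rng.normal(size=(2, 3)), rng.normal(size=(4, 2))) for _ in range(2)]
W0 = rng.normal(size=(4, 3))
X = rng.normal(size=(8, 3))
Y = X @ (W0 + compose_lora(mods, [0.7, -0.3])).T  # target task
w_best, loss_best = search_weights(mods, X, Y, W0)
```

Only the scalar weight vector is searched, so the optimization problem stays tiny regardless of model size; at inference the composed module behaves like any single LoRA, which is why no extra tokens per example are needed (unlike in-context learning).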

Chengsong Huang, Qian Liu, Bill Yuchen Lin, Tianyu Pang, Chao Du, Min Lin · 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval Multilingual (test) | Average Score | 20.3 | 52 |
| Mathematical Reasoning | MGSM (test) | Accuracy (MGSM) | 28.7 | 29 |
| Diverse Language Understanding | 62 downstream tasks | Average Accuracy | 66.3 | 18 |
| Summarization | NLP Paper | BERTScore | 82.4 | 10 |
| Summarization | SciTLDR (test) | RL Score | 35.63 | 10 |
| Summarization | SciTLDR | BERTScore | 0.78 | 10 |
| Summarization | Bloomberg (test) | RL Score | 28.13 | 10 |
| Summarization | Bloomberg | BERTScore | 0.726 | 10 |
| Summarization | MIMIC-III (test) | RL Score | 27.9 | 10 |
| Summarization | NLP Paper (test) | BLEU | 21 | 10 |

Showing 10 of 19 rows.
