LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation

About

Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios. We propose LoRA with Reduced Interference (LoRI), a simple yet effective approach that freezes the projection matrices $A$ as random projections and sparsifies the matrices $B$ using task-specific masks. This design substantially reduces the number of trainable parameters while maintaining strong task performance. Moreover, LoRI minimizes cross-task interference in adapter merging by leveraging the orthogonality between adapter subspaces, and supports continual learning by using sparsity to mitigate catastrophic forgetting. Extensive experiments across natural language understanding, mathematical reasoning, code generation, and safety alignment tasks demonstrate that LoRI outperforms full fine-tuning and existing PEFT methods, while using up to 95% fewer trainable parameters than LoRA. In multi-task experiments, LoRI enables effective adapter merging and continual learning with reduced cross-task interference. Code is available at: https://github.com/juzhengz/LoRI
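The core idea above — freeze the down-projection $A$ as a random matrix and train only a sparsely masked $B$ — can be sketched as a PyTorch module. This is a minimal illustration, not the authors' implementation (see the linked repository for that): the class name `LoRILinear` is invented, and where the paper derives task-specific masks for $B$, this sketch stands in a fixed random binary mask.

```python
import torch
import torch.nn as nn

class LoRILinear(nn.Module):
    """Hypothetical sketch of a LoRI-style adapter around a frozen linear layer.

    A is a frozen random projection (never trained); B is trainable but
    sparsified by a fixed binary mask, so only a small fraction of its
    entries receive gradient updates.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, density: float = 0.05):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen

        d_in, d_out = base.in_features, base.out_features
        # Frozen random projection A (rank x d_in), scaled for stable variance.
        self.A = nn.Parameter(torch.randn(rank, d_in) / rank**0.5,
                              requires_grad=False)
        # Trainable B (d_out x rank), initialized to zero as in standard LoRA.
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        # Fixed sparsity mask; the paper's masks are task-specific, whereas
        # this stand-in simply keeps a random `density` fraction of entries.
        self.register_buffer("mask", (torch.rand(d_out, rank) < density).float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-rank update: only the unmasked entries of B contribute.
        delta = (x @ self.A.T) @ (self.B * self.mask).T
        return self.base(x) + delta
```

Because $B$ starts at zero, the wrapped layer initially reproduces the base model exactly, and the only trainable tensor is the masked $B$ — which is where the "up to 95% fewer trainable parameters than LoRA" figure comes from, since LoRA trains both $A$ and $B$.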

Juzheng Zhang, Jiacheng You, Ashwinee Panda, Tom Goldstein • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 89.12 | 1891 |
| Code Generation | HumanEval | -- | -- | 1036 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 83.22 | 572 |
| Code Generation | HumanEval (test) | Pass@1 | 43.2 | 506 |
| Science Question Answering | ARC Challenge | Accuracy | 70.56 | 342 |
| Question Answering | BoolQ | Accuracy | 82.72 | 317 |
| Commonsense Reasoning | Common Sense Reasoning Tasks | Avg Score | 87.3 | 316 |
| Science Question Answering | ARC Easy | Accuracy | 90.82 | 155 |
| Social Commonsense Reasoning | SocialIQA | Accuracy | 83.4 | 100 |
| Safety Alignment | HEx-PHI | HEx-PHI Score | 94.7 | 12 |
