LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation

About

Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios. We propose LoRA with Reduced Interference (LoRI), a simple yet effective approach that freezes the projection matrices $A$ as random projections and sparsifies the matrices $B$ using task-specific masks. This design substantially reduces the number of trainable parameters while maintaining strong task performance. Moreover, LoRI minimizes cross-task interference in adapter merging by leveraging the orthogonality between adapter subspaces, and supports continual learning by using sparsity to mitigate catastrophic forgetting. Extensive experiments across natural language understanding, mathematical reasoning, code generation, and safety alignment tasks demonstrate that LoRI outperforms full fine-tuning and existing PEFT methods, while using up to 95% fewer trainable parameters than LoRA. In multi-task experiments, LoRI enables effective adapter merging and continual learning with reduced cross-task interference. Code is available at: https://github.com/juzhengz/LoRI
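To make the adapter structure concrete, below is a minimal PyTorch sketch of a LoRI-style layer: a frozen base weight $W$ adapted as $W + (B \odot M)A$, where $A$ is a frozen random projection and $M$ is a fixed binary mask so only a sparse subset of $B$ is trained. This is an illustrative sketch, not the authors' released code (see the linked repository for that); the class name `LoRILinear`, the `density` argument, and the randomly sampled mask are assumptions made here for brevity, whereas the paper constructs the masks per task rather than drawing them at random.

```python
import torch
import torch.nn as nn


class LoRILinear(nn.Module):
    """Illustrative LoRI-style adapter around a frozen nn.Linear.

    Adapts the frozen base weight W as W + (B * mask) @ A, where A is a
    frozen random projection and mask is a fixed binary sparsity pattern,
    so only a sparse subset of B ever receives gradients. Names and the
    random mask are assumptions for illustration, not the official code.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, density: float = 0.05):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base model stays frozen

        in_f, out_f = base.in_features, base.out_features
        # A: frozen random projection, stored as a buffer (never trained).
        self.register_buffer("A", torch.randn(rank, in_f) / rank ** 0.5)
        # B: the only trainable tensor, zero-initialized so the adapter
        # is a no-op before fine-tuning begins.
        self.B = nn.Parameter(torch.zeros(out_f, rank))
        # Task-specific binary mask over B; sampled randomly here purely
        # for illustration (the paper selects task-specific masks).
        self.register_buffer("mask", (torch.rand(out_f, rank) < density).float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Applying the mask inside forward() zeroes the gradient to the
        # pruned entries of B, so they never move from initialization.
        delta = x @ self.A.t() @ (self.B * self.mask).t()
        return self.base(x) + delta


# Usage: wrap a linear layer and run a forward pass.
layer = LoRILinear(nn.Linear(4096, 4096), rank=8, density=0.05)
y = layer(torch.randn(2, 16, 4096))
```

One consequence of this design worth noting: because every task's adapter shares the same frozen $A$, per-task updates live in subspaces spanned by differently masked rows of $B$, which is what the paper exploits when merging adapters with reduced cross-task interference.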

Juzheng Zhang, Jiacheng You, Ashwinee Panda, Tom Goldstein • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | Accuracy | 89.12 | 1460 |
| Code Generation | HumanEval | -- | -- | 850 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 83.22 | 329 |
| Question Answering | BoolQ | Accuracy | 82.72 | 240 |
| Science Question Answering | ARC Challenge | Accuracy | 70.56 | 234 |
| Science Question Answering | ARC Easy | Accuracy | 90.82 | 101 |
| Social Commonsense Reasoning | SocialIQA | Accuracy | 83.4 | 68 |
