
Low-Rank Interconnected Adaptation across Layers

About

Low-rank adaptation (LoRA) is a widely used parameter-efficient fine-tuning (PEFT) method that learns weight updates $\Delta W = AB$ for pretrained weights $W$ through low-rank adapters $A$ and $B$. While LoRA ensures hardware efficiency, its low-rank weight updates limit adaptation performance. In this paper, we propose low-rank interconnected adaptation across layers (Lily), a novel PEFT method that introduces an interconnected framework with locally shared $A$ and globally shared $B$ experts. This structure eliminates redundant per-layer $AB$ pairs, enabling higher-rank $\Delta W$ with equal or fewer parameters. To enhance expressiveness, we use data-dependent routers to determine $A$-$B$ interconnections, preventing $B$ experts from converging to the same behavior and improving representational power across domains. Experiments across modalities, architectures, and model sizes demonstrate Lily's superior performance and efficiency. GitHub: https://github.com/yibozhong/lily
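
To make the structure concrete, below is a minimal PyTorch sketch of how a Lily-style adapter could be wired: a low-dimensional projector $A$ shared by a local group of layers, a globally shared pool of high-dimensional projector ($B$) experts, and a data-dependent router that mixes the experts. All names, shapes, and sizes here (`LilyAdapter`, `A_shared`, `B_pool`, the expert count) are illustrative assumptions, not the reference implementation from the linked repository.

```python
import torch
import torch.nn as nn

class LilyAdapter(nn.Module):
    """Sketch of one Lily-adapted projection (hypothetical API).

    `A` is a low-dim projector shared by a local group of layers,
    `B_experts` is the globally shared pool of high-dim projector experts,
    and `router` produces data-dependent mixing weights over the experts.
    """

    def __init__(self, A: nn.Linear, B_experts: nn.ModuleList, router: nn.Linear):
        super().__init__()
        self.A = A                  # locally shared: d_model -> rank
        self.B_experts = B_experts  # globally shared: rank -> d_model, one per expert
        self.router = router        # data-dependent: rank -> num_experts

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); returns the low-rank update added to the frozen W x
        h = self.A(x)                                            # (batch, seq, rank)
        gate = torch.softmax(self.router(h), dim=-1)             # per-token expert weights
        expert_out = torch.stack([B(h) for B in self.B_experts], dim=-1)
        return (expert_out * gate.unsqueeze(-2)).sum(dim=-1)     # weighted sum over experts


# Components are created once and reused across layers (illustrative sizes).
d_model, rank, num_experts = 768, 8, 4
A_shared = nn.Linear(d_model, rank, bias=False)
B_pool = nn.ModuleList([nn.Linear(rank, d_model, bias=False) for _ in range(num_experts)])
router = nn.Linear(rank, num_experts, bias=False)

adapter = LilyAdapter(A_shared, B_pool, router)
delta = adapter(torch.randn(2, 16, d_model))  # applied alongside the frozen pretrained weight
```

Because the $A$ projector and the $B$ expert pool are shared rather than instantiated per layer, the same parameter budget can support a higher effective rank than one $AB$ pair per layer, which is the trade-off the abstract describes.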

Yibo Zhong, Jinman Zhao, Yao Zhou • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | VTAB-1K | Overall Mean Accuracy: 72.3 | 204 |
| Commonsense Reasoning | Commonsense Reasoning (BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC-e, ARC-c, OBQA) (test) | BoolQ Accuracy: 72.9 | 138 |
| Visual Task Adaptation | VTAB-1K | Average Accuracy: 77.3 | 78 |
| Natural Language Understanding | GLUE | CoLA Score: 68.4 | 41 |
