
Preference-Aligned LoRA Merging: Preserving Subspace Coverage and Addressing Directional Anisotropy

About

Merging multiple Low-Rank Adaptation (LoRA) modules is promising for constructing general-purpose systems, yet challenging because LoRA update directions span different subspaces and contribute unevenly. When modules are merged naively, these mismatches can weaken the directions most critical to certain task losses while overemphasizing less important ones, ultimately reducing the model's ability to represent all tasks faithfully. We revisit this problem through two perspectives: subspace coverage, which captures how broadly LoRA directions cover diverse representational directions, and anisotropy, which reflects the imbalance of influence across those directions. We propose TARA-Merging (Task-Rank Anisotropy Alignment), which aligns merging weights using a preference-weighted cross-entropy pseudo-loss while preserving task-relevant LoRA subspaces. This ensures broad subspace coverage and mitigates anisotropy via direction-wise reweighting. Across eight vision and six NLI benchmarks, TARA-Merging consistently outperforms vanilla and LoRA-aware baselines, demonstrating strong robustness and generalization, and highlighting the importance of addressing both subspace coverage and anisotropy in LoRA merging.

Wooseong Jeong, Wonyoung Lee, Kuk-Jin Yoon • 2026
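
The abstract describes the method only at a high level, and the paper's preference-weighted pseudo-loss is not reproduced on this page. The Python sketch below is a minimal illustration of the direction-wise reweighting idea, not the authors' implementation: it merges per-task LoRA updates, exposes the merged update's principal directions via SVD, scores each direction by its aggregated energy across tasks, and rescales the spectrum to flatten anisotropy. The function name `merge_loras_directionwise`, the squared-projection importance score, and the Frobenius-norm renormalization are all assumptions made for this sketch.

```python
import numpy as np

def merge_loras_directionwise(deltas, task_weights=None, rank=None):
    """Merge per-task LoRA updates with direction-wise reweighting (a sketch).

    deltas       : list of dense updates Delta_W_t = B_t @ A_t, each (d_out, d_in)
    task_weights : optional per-task merging weights (e.g. preference-derived);
                   defaults to a uniform (vanilla) merge
    rank         : number of merged singular directions to keep
    """
    T = len(deltas)
    if task_weights is None:
        task_weights = np.full(T, 1.0 / T)

    # Step 1: weighted sum of task updates (the naive baseline merge).
    merged = sum(w * d for w, d in zip(task_weights, deltas))

    # Step 2: SVD exposes the merged update's principal directions; a skewed
    # singular-value spectrum here is the directional anisotropy the
    # abstract describes.
    U, s, Vt = np.linalg.svd(merged, full_matrices=False)
    r = rank if rank is not None else len(s)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]

    # Step 3: score each merged direction (u_i, v_i) by the energy every task
    # update places on it: coeff_i = u_i^T Delta_W_t v_i, accumulated over tasks.
    importance = np.zeros(r)
    for d in deltas:
        coeff = np.einsum('di,de,ie->i', U, d, Vt)
        importance += coeff ** 2

    # Step 4: replace the anisotropic spectrum with the task-aggregated
    # importance, renormalized to preserve the merged update's Frobenius norm,
    # so no task's critical directions are drowned out by dominant ones.
    importance = np.sqrt(importance)
    new_s = importance * (np.linalg.norm(s) / (np.linalg.norm(importance) + 1e-12))
    return U @ np.diag(new_s) @ Vt

# Toy usage: merge three rank-4 LoRA updates on a 64x64 weight matrix.
rng = np.random.default_rng(0)
deltas = [rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))
          for _ in range(3)]
merged_update = merge_loras_directionwise(deltas, rank=12)
print(merged_update.shape)  # (64, 64)
```

The importance score here is a stand-in for the paper's preference-weighted objective; the key design point it illustrates is that reweighting happens per direction of the merged subspace rather than per module, which is what lets the merge keep coverage of every task's subspace while evening out their influence.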

Related benchmarks

Task                        Dataset                    Result                   Rank
Visual Question Answering   VizWiz                     Accuracy: 79.2           1525
Visual Question Answering   ChartQA                    --                       371
Visual Question Answering   ScienceQA                  --                       370
Image Classification        8-task vision benchmark    Average Accuracy: 80.5   64
Visual Question Answering   IconQA                     Top-1 Acc: 69.5          57
Image Classification        DTD                        Average Accuracy: 76.3   19
Image Classification        EuroSAT                    Average Accuracy: 76.3   19
Image Classification        GTSRB                      Average Accuracy: 76.3   18
Image Classification        MNIST                      Average Accuracy: 76.3   18
Image Classification        SUN397                     Average Accuracy: 76.3   18

(Showing 10 of 17 rows.)
