
FlyLoRA: Boosting Task Decoupling and Parameter Efficiency via Implicit Rank-Wise Mixture-of-Experts

About

Low-Rank Adaptation (LoRA) is a widely used parameter-efficient fine-tuning method for foundation models, but it suffers from parameter interference, resulting in suboptimal performance. Although Mixture-of-Experts (MoE)-based LoRA variants show promise in mitigating intra-task correlations in single-task instruction tuning, they introduce additional router parameters and remain ineffective in multi-task model merging where inter-task interference arises. Inspired by the fly olfactory circuit, we propose FlyLoRA, an implicit MoE-based LoRA variant that introduces: (1) rank-wise expert activation in the up-projection matrix, and (2) an implicit router that unifies expert routing and down-projection, where a frozen sparse random projection matrix replaces the traditional dense trainable version. This design resolves the trade-off between intra-task decorrelation and computational efficiency by eliminating the need for an explicit router, while inherently mitigating inter-task interference due to the orthogonality property of random matrices. Extensive experiments across four domains -- general knowledge understanding, scientific question answering, mathematical reasoning, and code generation -- demonstrate consistent performance improvements over existing methods. Beyond empirical gains, FlyLoRA highlights how biological structures can inspire innovations in AI technologies. Code is available at https://github.com/gfyddha/FlyLoRA.
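To make the mechanism concrete, here is a minimal NumPy sketch of the idea described in the abstract: a frozen sparse random down-projection serves as an implicit router, and only the top-k of the r rank-wise "experts" are passed through the trainable up-projection. All names, dimensions, and the top-k selection rule are illustrative assumptions based on the abstract, not the authors' actual implementation (see the linked repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, k, p = 64, 8, 2, 0.1  # model dim, LoRA rank, active ranks, sparsity (all illustrative)

# Frozen sparse random down-projection: each entry is nonzero with small
# probability p, mimicking the sparse random wiring of the fly olfactory
# circuit. It is never trained, so no router parameters are added.
A = rng.standard_normal((r, d)) * (rng.random((r, d)) < p)

# Trainable up-projection; each of the r ranks acts as one "expert".
B = rng.standard_normal((d, r)) * 0.01

def flylora_delta(x, k=k):
    """Low-rank update with rank-wise top-k expert activation (hypothetical sketch)."""
    z = A @ x                          # implicit routing + down-projection in one step
    top = np.argsort(np.abs(z))[-k:]  # activate the k ranks with the largest response
    mask = np.zeros_like(z)
    mask[top] = 1.0                    # all other ranks are silenced for this token
    return B @ (z * mask)             # up-project only the active ranks

x = rng.standard_normal(d)
delta = flylora_delta(x)              # added to the frozen base layer's output
```

Because A is random and frozen, down-projections for different tasks are near-orthogonal in expectation, which is the property the abstract credits for reducing inter-task interference during model merging.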

Heming Zou, Yunliang Zang, Wutong Xu, Yao Zhu, Xiangyang Ji • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Code Generation | HumanEval (test) | Pass@1 | 32.32 | 506
Math Word Problem Solving | GSM8K | Accuracy | 58.7 | 87
Language Understanding | MMLU | Accuracy | 65.1 | 34
Mathematical Reasoning | Mathematical Reasoning Benchmarks (AddSub, AQuA, GSM8K, MultiArith, SingleEq, SVAMP) (test) | Accuracy (AddSub) | 95.27 | 18
Science Question Answering | ScienceQA text-only | Accuracy | 94.1 | 7
Multi-task Model Merging | MMLU, SciQA, and GSM8K (test) | Average (Individual) | 72.6 | 4
