
NeuroLoRA: Context-Aware Neuromodulation for Parameter-Efficient Multi-Task Adaptation

About

Parameter-Efficient Fine-Tuning (PEFT) techniques, particularly Low-Rank Adaptation (LoRA), have become essential for adapting Large Language Models (LLMs) to downstream tasks. While the recent FlyLoRA framework successfully leverages bio-inspired sparse random projections to mitigate parameter interference, it relies on a static, magnitude-based routing mechanism that is agnostic to input context. In this paper, we propose NeuroLoRA, a novel Mixture-of-Experts (MoE) based LoRA framework inspired by biological neuromodulation -- the dynamic regulation of neuronal excitability based on context. NeuroLoRA retains the computational efficiency of frozen random projections while introducing a lightweight, learnable neuromodulation gate that contextually rescales the projection space prior to expert selection. We further propose a Contrastive Orthogonality Loss to explicitly enforce separation between expert subspaces, enhancing both task decoupling and continual learning capacity. Extensive experiments on MMLU, GSM8K, and ScienceQA demonstrate that NeuroLoRA consistently outperforms FlyLoRA and other strong baselines across single-task adaptation, multi-task model merging, and sequential continual learning scenarios, while maintaining comparable parameter efficiency.
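The mechanism described above can be sketched in a few lines. The following is a minimal, illustrative NumPy sketch, not the paper's implementation: the sparse projection `P`, the gate parameterization `W_gate`, the slice-energy routing score, and the pairwise orthogonality penalty standing in for the Contrastive Orthogonality Loss are all assumptions made for clarity, and the per-expert `A`/`B` matrices follow generic LoRA conventions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_proj, n_experts, rank, k = 64, 32, 4, 8, 2

# Frozen sparse random projection (FlyLoRA-style); the ~10% connectivity
# pattern here is an illustrative assumption, not the paper's construction.
P = rng.standard_normal((d_model, d_proj))
P *= rng.random((d_model, d_proj)) < 0.1

# Lightweight learnable neuromodulation gate: maps the input to a
# per-dimension scale for the projection space (hypothetical parameterization).
W_gate = rng.standard_normal((d_model, d_proj)) * 0.01

# Per-expert low-rank adapters; B starts at zero, as is standard for LoRA.
A = rng.standard_normal((n_experts, d_proj, rank)) * 0.01
B = np.zeros((n_experts, rank, d_model))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neurolora_forward(x):
    """Context-aware routing sketch: rescale the frozen projection with the
    gate, then select top-k experts on the modulated representation."""
    h = x @ P                      # frozen random projection
    g = sigmoid(x @ W_gate)       # contextual neuromodulation scale
    h_mod = g * h                  # rescaled projection space
    # Score each expert by the energy in its slice of the modulated
    # projection (an assumed routing scheme, for illustration only).
    scores = [np.linalg.norm(s) for s in np.split(h_mod, n_experts)]
    top_k = np.argsort(scores)[-k:]
    delta = sum(h_mod @ A[e] @ B[e] for e in top_k)
    return x + delta               # residual update from selected experts

def orthogonality_penalty(A):
    """Simplified stand-in for the Contrastive Orthogonality Loss: penalize
    overlap between each pair of expert down-projection subspaces."""
    loss = 0.0
    for i in range(n_experts):
        for j in range(i + 1, n_experts):
            loss += np.sum((A[i].T @ A[j]) ** 2)
    return loss
```

Because the gate `g` depends on the input `x`, two inputs can activate different experts even under the same frozen projection, which is the contextual behavior that the static, magnitude-based routing in FlyLoRA cannot express.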

Yuxin Yang, Haoran Zhang, Mingxuan Li, Jiachen Xu, Ruoxi Shen, Zhenyu Wang, Tianhao Liu, Siqi Chen, Weilin Huang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Math Word Problem Solving | GSM8K | Accuracy | 61.2 | 87 |
| Language Understanding | MMLU | Accuracy | 66.3 | 34 |
| Science Question Answering | ScienceQA (text-only) | Accuracy | 95.5 | 7 |
| Continual Learning | MMLU -> ScienceQA -> GSM8K | MMLU Accuracy | 62.1 | 5 |
| Multi-task Model Merging | MMLU, SciQA, and GSM8K (test) | Average (Individual) | 74.3 | 4 |
