
GAIN: Multiplicative Modulation for Domain Adaptation

About

Adapting LLMs to new domains causes forgetting because standard methods (full fine-tuning, LoRA) inject new directions into the weight space. We propose GAIN, which re-emphasizes existing features through multiplicative modulation W_new = S * W. The learned diagonal matrix S is applied to the attention output projection and optionally the FFN. The principle mirrors gain modulation in neuroscience, where neurons adapt to context by scaling response strength while preserving selectivity. We evaluate GAIN on five models from four families (774M to 70B), adapting sequentially across eight domains. GAIN-FFN matches LoRA's in-domain adaptation, but their effects on previously trained domains are opposite: GAIN-FFN improves them by 7-13% (validation PPL), while LoRA degrades them by 18-36%. Downstream accuracy confirms the pattern: for example, after seven sequential adaptations on Qwen2.5, GAIN-FFN degrades BoolQ by only 0.8% while LoRA damages it by 14.9%. GAIN adds 46K-230K parameters per model and can be absorbed into the pretrained weights for zero inference cost.
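The core operation described above, W_new = S * W with a learned diagonal S, can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' implementation: the dimensions, initialization, and trained gain values below are hypothetical, and only the diagonal gain vector `s` would be trainable while `W` stays frozen. It also shows the absorption step: because S is diagonal, it can be folded into the pretrained weights, so inference costs nothing extra.

```python
import numpy as np

def gain_modulate(W, s):
    """Multiplicative modulation W_new = S @ W, where S = diag(s).

    Scaling row i of W by s[i] re-weights the i-th output feature
    without rotating the weight space (no new directions are added).
    """
    return s[:, None] * W

d = 4                                   # hypothetical hidden size
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))         # frozen pretrained projection weights
s_init = np.ones(d)                     # gains start at 1: identity, no change

# After adaptation, training would have updated only s (hypothetical values):
s_trained = np.array([1.1, 0.9, 1.0, 1.05])

# Absorb S into W once; the modulated model then has zero inference overhead.
W_absorbed = gain_modulate(W, s_trained)

x = rng.standard_normal(d)
# Scaling outputs on the fly equals using the absorbed weights:
assert np.allclose(W_absorbed @ x, s_trained * (W @ x))
```

Initializing the gains at 1 means the adapted model starts exactly at the pretrained model, which is one way to see why this parameterization is gentler on previously learned domains than additive updates.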

Hengshuai Yao, Xing Chen, Ahmed Murtadha, Guan Wang • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Commonsense Reasoning | HellaSwag | -- | 1891 |
| Commonsense Reasoning | WinoGrande | -- | 1085 |
| Physical Commonsense Reasoning | PIQA | -- | 572 |
| Sentence Completion | HellaSwag | -- | 276 |
| Language Modeling | PG-19 | -- | 160 |
| Question Answering | OpenBookQA | Normalized Accuracy: 0.4 | 102 |
| Question Answering | ARC-C | -- | 87 |
| Language Modeling | Medical (Med) | PPL Change (%) vs Baseline: 0.8 | 30 |
| Language Modeling | Finance (Fin) | PPL Change (%): 0.00e+0 | 28 |
| Language Modeling | WikiText-103 | Delta PPL: 0.00e+0 | 25 |

Showing 10 of 25 rows.
