
FourierMoE: Fourier Mixture-of-Experts Adaptation of Large Language Models

About

Parameter-efficient fine-tuning (PEFT) has emerged as a crucial paradigm for adapting large language models (LLMs) under constrained computational budgets. However, standard PEFT methods often struggle in multi-task fine-tuning settings, where diverse optimization objectives induce task interference and limited parameter budgets lead to representational deficiency. While recent approaches incorporate mixture-of-experts (MoE) to alleviate these issues, they predominantly operate in the spatial domain, which may introduce structural redundancy and parameter overhead. To overcome these limitations, we reformulate adaptation in the spectral domain. Our spectral analysis reveals that different tasks exhibit distinct frequency energy distributions, and that LLM layers display heterogeneous frequency sensitivities. Motivated by these insights, we propose FourierMoE, which integrates the MoE architecture with the inverse discrete Fourier transform (IDFT) for frequency-aware adaptation. Specifically, FourierMoE employs a frequency-adaptive router to dispatch tokens to experts specialized in distinct frequency bands. Each expert learns a set of conjugate-symmetric complex coefficients, preserving complete phase and amplitude information while theoretically guaranteeing lossless IDFT reconstruction into real-valued spatial weights. Extensive evaluations across 28 benchmarks, multiple model architectures, and scales demonstrate that FourierMoE consistently outperforms competitive baselines in both single-task and multi-task settings while using significantly fewer trainable parameters. These results highlight the promise of spectral-domain expert adaptation as an effective and parameter-efficient paradigm for LLM fine-tuning.
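The lossless-reconstruction claim rests on a standard Fourier fact: a conjugate-symmetric (Hermitian) spectrum always inverts to a real-valued signal. The sketch below illustrates this with NumPy's real FFT pair; it is not the authors' implementation, and the weight length `n` and the random coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16  # hypothetical spatial length of one expert's weight vector

# An expert would store only the non-redundant half of a length-n
# conjugate-symmetric spectrum: n // 2 + 1 complex coefficients,
# carrying both amplitude and phase per frequency band.
half_spectrum = (rng.standard_normal(n // 2 + 1)
                 + 1j * rng.standard_normal(n // 2 + 1))
# The DC and Nyquist bins must be real for the full spectrum to be
# exactly conjugate-symmetric (and the round trip exactly lossless).
half_spectrum[0] = half_spectrum[0].real
half_spectrum[-1] = half_spectrum[-1].real

# irfft assumes conjugate symmetry, so the reconstructed spatial
# weights are real-valued by construction -- no imaginary residue.
spatial_weights = np.fft.irfft(half_spectrum, n=n)

# Lossless: the forward real FFT recovers the stored coefficients.
assert np.allclose(np.fft.rfft(spatial_weights), half_spectrum)
```

Storing only the half-spectrum is where the parameter saving comes from: `n` real spatial weights are represented by `n // 2 + 1` complex coefficients with two real constraints, i.e. exactly `n` real degrees of freedom, so nothing is lost or duplicated.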

Juyong Jiang, Fan Wang, Hong Qi, Sunghun Kim, Jing Tang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | EuroSAT | Accuracy | 98.93 | 569 |
| Image Classification | SUN397 | Accuracy | 66.41 | 441 |
| Classification | Cars | Accuracy | 64.44 | 395 |
| Image Classification | RESISC45 | Accuracy | 93.65 | 349 |
| Commonsense Reasoning | Commonsense Reasoning (BoolQ, PIQA, SIQA, HellaS., WinoG., ARC-e, ARC-c, OBQA) (test) | BoolQ Accuracy | 75.26 | 202 |
| Arithmetic Reasoning | ADDSUB | Accuracy | 94.68 | 123 |
| Math Reasoning | AQUA | Accuracy | 41.34 | 78 |
| Math Reasoning | MultiArith | Accuracy | 97.33 | 65 |
| Math Reasoning | GSM8K | Accuracy (GSM8K) | 78.09 | 49 |
| Math Reasoning | SVAMP | Accuracy | 91.8 | 40 |

Showing 10 of 13 rows.
