
Swimba: Switch Mamba Model Scales State Space Models

About

Mixture-of-experts (MoE) is a common approach for increasing parameter capacity, but applying MoE to state space model (SSM) token mixers can multiply the cost of the recurrent state update. We study how to introduce expert specialization into selective SSMs while preserving computational efficiency. We show that MoE-SSM can refer to two designs: (1) MoE over separated SSMs, which maintains multiple state trajectories and thus scales compute with the number of experts; and (2) MoE-parameterized SSM, which mixes experts in parameter space, maintains a single state trajectory, and evaluates the recurrence once. Our method, Switch Mamba (Swimba), follows the second design by routing over expert-produced SSM streams. Theoretically, we establish well-definedness and stability for MoE-parameterized SSMs and characterize the relationship between the two designs. Empirically, we evaluate Swimba on standard benchmark tasks and measure real-time throughput and latency. Under matched FLOPs, Swimba achieves slightly better average performance than the baseline, with a small slowdown in real-time latency and throughput. Overall, these results suggest that parameter-space MoE can increase SSM capacity while keeping the dominant recurrence cost fixed.
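The distinction between the two designs can be illustrated with a toy diagonal linear SSM. The sketch below is not the Swimba implementation; all function names and shapes are illustrative assumptions. Design (1) runs one recurrence per expert and mixes outputs, so compute scales with the number of experts; design (2) mixes the SSM parameters first and runs the recurrence once.

```python
import numpy as np

def ssm_scan(A, B, C, x):
    # Toy diagonal linear SSM: h_t = A * h_{t-1} + B * x_t, y_t = C . h_t
    # A, B, C: (d,) diagonal/vector parameters; x: (T,) scalar inputs.
    h = np.zeros_like(A)
    ys = np.empty(len(x))
    for t, xt in enumerate(x):
        h = A * h + B * xt
        ys[t] = C @ h
    return ys

def moe_separated(As, Bs, Cs, w, x):
    # Design (1): one state trajectory per expert; the recurrence is
    # evaluated n_experts times, so compute scales with the expert count.
    return sum(we * ssm_scan(A, B, C, x)
               for we, A, B, C in zip(w, As, Bs, Cs))

def moe_parameterized(As, Bs, Cs, w, x):
    # Design (2): mix experts in parameter space with router weights w,
    # keep a single state trajectory, and evaluate the recurrence once.
    A = np.tensordot(w, As, axes=1)
    B = np.tensordot(w, Bs, axes=1)
    C = np.tensordot(w, Cs, axes=1)
    return ssm_scan(A, B, C, x)
```

With a one-hot router both designs select the same single expert and agree exactly; with soft routing they generally differ, since mixing parameters before the recurrence is not the same as mixing the per-expert outputs. Keeping each diagonal of A inside (0, 1) gives a stable recurrence, in the spirit of the stability result stated in the abstract.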

Zhixu Du, Krishna Teja Chitty-Venkata, Murali Emani, Venkatram Vishwanath, Hai Helen Li, Yiran Chen• 2026

Related benchmarks

Task                            Dataset        Metric                Result   Rank
Commonsense Reasoning           WinoGrande     Accuracy              79.1     1085
Question Answering              ARC Challenge  Accuracy              59.5     906
Language Understanding          MMLU           Accuracy              75       825
Question Answering              BoolQ          --                    --       317
Recognizing Textual Entailment  RTE            Accuracy              73.2     47
Question Answering              ARC Easy       Normalized Accuracy   84.1     18
Commonsense Reasoning           PIQA           Normalized Accuracy   0.825    13
Question Answering              OpenBookQA     Accuracy              34.9     7
Commonsense Reasoning           HellaSwag      Accuracy              64.2     6
