MoBA: Mixture of Block Attention for Long-Context LLMs

About

Scaling the effective context length is essential for advancing large language models (LLMs) toward artificial general intelligence (AGI). However, the quadratic computational complexity of traditional attention mechanisms imposes prohibitive overhead. Existing approaches either introduce strongly biased, task-specific structures, such as sink or window attention, or radically modify the attention mechanism into linear approximations, whose performance on complex reasoning tasks remains inadequately explored. In this work, we propose a solution that adheres to the "less structure" principle, allowing the model to determine where to attend autonomously rather than introducing predefined biases. We introduce Mixture of Block Attention (MoBA), an innovative approach that applies the principles of Mixture of Experts (MoE) to the attention mechanism. This novel architecture demonstrates superior performance on long-context tasks while offering a key advantage: the ability to seamlessly transition between full and sparse attention, improving efficiency without the risk of compromising performance. MoBA has already been deployed to serve Kimi's long-context requests and demonstrates significant advancements in efficient attention computation for LLMs. Our code is available at https://github.com/MoonshotAI/MoBA.
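The core idea, applying MoE-style gating to attention, can be sketched as follows: keys and values are partitioned into blocks, a gate scores each block against the query (e.g. via a mean-pooled block key), and attention is computed only over the top-k selected blocks. This is a minimal single-query sketch under those assumptions; all function and parameter names are illustrative and not taken from the paper's released code, which also handles causality, batching, and kernel-level optimizations.

```python
import numpy as np

def moba_attention(q, K, V, block_size=4, top_k=2):
    """Sketch of MoBA-style sparse attention for a single query vector q.

    Keys/values are split into contiguous blocks; a gate scores each
    block by q . mean(K_block) and keeps the top_k blocks; standard
    softmax attention is then computed over only those blocks.
    """
    n, d = K.shape
    num_blocks = n // block_size
    Kb = K[: num_blocks * block_size].reshape(num_blocks, block_size, d)
    Vb = V[: num_blocks * block_size].reshape(num_blocks, block_size, d)

    # Gate: score each block with the mean-pooled key, select top-k blocks.
    block_scores = Kb.mean(axis=1) @ q
    chosen = np.argsort(block_scores)[-top_k:]

    # Softmax attention restricted to the selected blocks.
    K_sel = Kb[chosen].reshape(-1, d)
    V_sel = Vb[chosen].reshape(-1, d)
    logits = K_sel @ q / np.sqrt(d)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ V_sel
```

With top_k equal to the number of blocks, this reduces to full attention over all tokens, which illustrates the seamless full/sparse transition the abstract describes.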

Enzhe Lu, Zhejun Jiang, Jingyuan Liu, Yulun Du, Tao Jiang, Chao Hong, Shaowei Liu, Weiran He, Enming Yuan, Yuzhi Wang, Zhiqi Huang, Huan Yuan, Suting Xu, Xinran Xu, Guokun Lai, Yanru Chen, Huabin Zheng, Junjie Yan, Jianlin Su, Yuxin Wu, Neo Y. Zhang, Zhilin Yang, Xinyu Zhou, Mingxing Zhang, Jiezhong Qiu• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | ARC Easy | -- | -- | 597 |
| Commonsense Reasoning | HellaSwag | Accuracy | 74.9 | 350 |
| Boolean Question Answering | BoolQ | Accuracy | 86.1 | 323 |
| Question Answering | ARC Challenge | Accuracy (ARC) | 56.4 | 142 |
| Long-context Language Understanding | InfiniteBench | En.Sum | 14.56 | 81 |
| Language Modeling | LAMBADA | Accuracy | 64.6 | 76 |
| Long Video Understanding | MLVU (dev) | Score | 64.7 | 63 |
| Long Video Understanding | VideoMME | -- | -- | 40 |
| Long-context Language Modeling | LongBench-E 1.0 (test) | S-Doc QA Perf. | 46.63 | 37 |
| Long-form Video Understanding | LVBench | Overall Score | 42.3 | 35 |

Showing 10 of 22 rows.
