
Efficient Context Scaling with LongCat ZigZag Attention

About

We introduce LongCat ZigZag Attention (LoZA), a sparse attention scheme designed to transform existing full-attention models into sparse ones within a limited compute budget. In long-context scenarios, LoZA achieves significant speed-ups in both prefill-intensive (e.g., retrieval-augmented generation) and decode-intensive (e.g., tool-integrated reasoning) settings. Specifically, by applying LoZA to LongCat-Flash during mid-training, we obtain LongCat-Flash-Exp, a long-context foundation model that can swiftly process up to 1 million tokens, enabling efficient long-term reasoning and long-horizon agentic capabilities.

Chen Zhang, Yang Bai, Jiahuan Li, Anchun Gui, Keheng Wang, Feifan Liu, Guanyu Wu, Yuwei Jiang, Defei Bu, Li Wei, Haihang Jing, Hongyin Tang, Xin Chen, Xiangzhou Huang, Fengcun Li, Rongxiang Weng, Yulei Qian, Yifan Lu, Yerui Sun, Jingang Wang, Yuchen Xie, Xunliang Cai • 2025
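
The abstract does not spell out the ZigZag pattern itself, but block-sparse attention masking is the general mechanism that schemes like LoZA build on: each query attends to a small subset of key blocks instead of the full quadratic score matrix, so attention cost tracks the attended fraction rather than the full context length squared. The sketch below is a hypothetical illustration of that idea, not the LoZA algorithm: the `zigzag_block_mask` function, its `block`, `local_blocks`, and `stride` parameters, and the alternating-offset pattern are all assumptions chosen only to show how a sparse mask works.

```python
import numpy as np

def zigzag_block_mask(n_tokens: int, block: int = 64, local_blocks: int = 2,
                      stride: int = 4) -> np.ndarray:
    """Hypothetical block-sparse causal mask (illustrative, not LoZA's pattern).

    Each query block attends to (a) itself and the previous `local_blocks`
    blocks (a sliding local window), and (b) every `stride`-th earlier block,
    with the starting offset alternating per query row to form a zigzag.
    """
    n_blocks = -(-n_tokens // block)  # ceil division
    allowed = np.zeros((n_blocks, n_blocks), dtype=bool)
    for q in range(n_blocks):
        lo = max(0, q - local_blocks + 1)
        allowed[q, lo:q + 1] = True            # local causal window
        offset = q % 2                         # alternating "zigzag" offset
        allowed[q, offset:q + 1:stride] = True # strided earlier blocks
    # Expand the block mask to token resolution and enforce causality.
    mask = np.kron(allowed, np.ones((block, block), dtype=bool))
    mask = mask[:n_tokens, :n_tokens]
    causal = np.tril(np.ones((n_tokens, n_tokens), dtype=bool))
    return mask & causal

def sparse_attention(q, k, v, mask):
    """Masked softmax attention; disallowed positions score -inf."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 256, 32
    q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
    mask = zigzag_block_mask(n, block=32)
    out = sparse_attention(q, k, v, mask)
    print(f"attended fraction: {mask.mean():.2%}, output shape: {out.shape}")
```

In a scheme of this shape, the number of attended query-key pairs, not the full quadratic score matrix, drives attention compute, which is the general source of the prefill and decode speed-ups the abstract describes.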

Related benchmarks

Task | Dataset | Metric | Result | Rank
Instruction Following | IFEval | - | - | 625
Code Generation | HumanEval+ | Pass@1 | 87.2 | 383
General Knowledge | MMLU | Accuracy | 89.6 | 234
Code Generation | MBPP+ | Pass@1 | 79.1 | 216
Long-context Understanding | LongBench v2 | - | - | 109
Math | MATH 500 | Accuracy | 98.8 | 86
Multilingual Mathematical Reasoning | MGSM | Accuracy | 94.6 | 52
Code Generation | FullStackBench | Pass@1 | 64.1 | 45
General Knowledge | CMMLU | Accuracy | 87.5 | 25
Multilingual Knowledge | MMMLU | Accuracy | 85.2 | 18

Showing 10 of 24 rows.
