
Qwen2.5-1M Technical Report

About

We introduce Qwen2.5-1M, a series of models that extend the context length to 1 million tokens. Compared to the previous 128K version, the Qwen2.5-1M series has significantly enhanced long-context capabilities through long-context pre-training and post-training. Key techniques such as long data synthesis, progressive pre-training, and multi-stage supervised fine-tuning are employed to effectively enhance long-context performance while reducing training costs.

To promote the use of long-context models among a broader user base, we present and open-source our inference framework. This framework includes a length extrapolation method that can expand the model context lengths by at least four times, or even more, without additional training. To reduce inference costs, we implement a sparse attention method along with chunked prefill optimization for deployment scenarios, and a sparsity refinement method to improve precision. Additionally, we detail our optimizations in the inference engine, including kernel optimization, pipeline parallelism, and scheduling optimization, which significantly enhance overall inference performance. By leveraging our inference framework, the Qwen2.5-1M models achieve a remarkable 3x to 7x prefill speedup in scenarios with 1 million tokens of context. This framework provides an efficient and powerful solution for developing applications that require long-context processing using open-source models.

The Qwen2.5-1M series currently includes the open-source models Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M, as well as the API-accessed model Qwen2.5-Turbo. Evaluations show that Qwen2.5-1M models are greatly improved on long-context tasks without compromising performance in short-context scenarios. Specifically, the Qwen2.5-14B-Instruct-1M model significantly outperforms GPT-4o-mini on long-context tasks while supporting contexts eight times longer.
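The abstract mentions chunked prefill as one of the deployment optimizations. A minimal sketch of the idea, assuming nothing about the actual Qwen2.5-1M implementation: instead of running prefill over the entire million-token prompt in one pass (whose activation memory scales with the full prompt length), the prompt is split into fixed-size chunks that are processed sequentially, each appending to a growing KV cache. The function name, chunk size, and list-based "cache" below are all illustrative.

```python
CHUNK_SIZE = 4  # real deployments use thousands of tokens per chunk


def prefill_chunked(tokens, chunk_size=CHUNK_SIZE):
    """Prefill `tokens` chunk by chunk, returning the accumulated KV cache.

    Here the "KV cache" is just the list of tokens seen so far; in a real
    inference engine it would hold per-layer key/value tensors.
    """
    kv_cache = []
    for start in range(0, len(tokens), chunk_size):
        chunk = tokens[start:start + chunk_size]
        # With causal attention, each chunk attends to the cached prefix
        # plus itself, so peak activation memory scales with chunk_size
        # rather than the total prompt length.
        kv_cache.extend(chunk)
    return kv_cache


prompt = list(range(10))
cache = prefill_chunked(prompt)
print(len(cache))  # 10: the full prompt is cached after prefill
```

The design point this illustrates is that chunked prefill trades a single large attention computation for a sequence of bounded ones, which is what makes 1M-token prompts feasible on fixed-memory hardware.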

An Yang, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoyan Huang, Jiandong Jiang, Jianhong Tu, Jianwei Zhang, Jingren Zhou, Junyang Lin, Kai Dang, Kexin Yang, Le Yu, Mei Li, Minmin Sun, Qin Zhu, Rui Men, Tao He, Weijia Xu, Wenbiao Yin, Wenyuan Yu, Xiafei Qiu, Xingzhang Ren, Xinlong Yang, Yong Li, Zhiying Xu, Zipeng Zhang • 2025

Related benchmarks

Task                            | Dataset                                  | Result                       | Rank
Mathematical Reasoning          | GSM8K                                    | -                            | 351
Instruction Following           | IFEval                                   | Accuracy (0-100): 81.9       | 292
Question Answering              | GPQA                                     | Accuracy: 44.6               | 258
Question Answering              | PopQA                                    | Accuracy: 28                 | 186
Long-context Understanding      | LongBench                                | Overall Average Score: 47.6  | 115
Long-context Question Answering | HotpotQA In-Distribution                 | Accuracy: 83.6               | 72
Multi-hop Question Answering    | 2WikiMultiHopQA Out-Of-Distribution (OOD)| Accuracy: 62.5               | 72
General Reasoning               | BIG-Bench Hard                           | Accuracy: 80.9               | 68
Question Answering              | MMLU                                     | Accuracy: 84.6               | 62
Long-context Question Answering | 2WikiMultiHopQA (Out-Of-Distribution)    | Accuracy: 62.5               | 54

(Showing 10 of 69 rows.)
