
QwenLong-L1: Towards Long-Context Large Reasoning Models with Reinforcement Learning

About

Recent large reasoning models (LRMs) have demonstrated strong reasoning capabilities through reinforcement learning (RL). These improvements have primarily been observed in short-context reasoning tasks. In contrast, extending LRMs to effectively process and reason over long-context inputs via RL remains a critical unsolved challenge. To bridge this gap, we first formalize the paradigm of long-context reasoning RL and identify its key challenges: suboptimal training efficiency and an unstable optimization process. To address these issues, we propose QwenLong-L1, a framework that adapts short-context LRMs to long-context scenarios via progressive context scaling. Specifically, we utilize a warm-up supervised fine-tuning (SFT) stage to establish a robust initial policy, followed by a curriculum-guided phased RL technique to stabilize policy evolution, enhanced with a difficulty-aware retrospective sampling strategy to incentivize policy exploration. Experiments on seven long-context document question-answering benchmarks demonstrate that QwenLong-L1-32B outperforms flagship LRMs like OpenAI-o3-mini and Qwen3-235B-A22B and achieves performance on par with Claude-3.7-Sonnet-Thinking, placing it among the leading state-of-the-art LRMs. This work advances the development of practical long-context LRMs capable of robust reasoning across information-intensive environments.
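The training recipe above, curriculum-guided phases ordered by context length plus difficulty-aware retrospective sampling, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the phase caps, the difficulty proxy (one minus prior accuracy), and the field names `context_len` and `prior_acc` are all assumptions.

```python
import random

def build_phases(samples, context_caps=(20_000, 60_000, 120_000)):
    """Split training samples into curriculum phases by input context length.

    Each phase holds samples whose context fits under its cap, so RL can
    progress from shorter to longer contexts (progressive context scaling).
    The caps here are illustrative, not the paper's schedule.
    """
    phases = [[] for _ in context_caps]
    for s in samples:
        for i, cap in enumerate(context_caps):
            if s["context_len"] <= cap:
                phases[i].append(s)
                break  # samples longer than the last cap are dropped
    return phases

def retrospective_sample(phase, k, rng=random):
    """Draw k samples, weighting harder examples more heavily.

    Difficulty is approximated by (1 - prior accuracy) from earlier
    rollouts; the small epsilon keeps fully solved samples drawable.
    """
    weights = [1.0 - s["prior_acc"] + 1e-3 for s in phase]
    return rng.choices(phase, weights=weights, k=k)
```

In this sketch, each RL phase trains only on its own length bucket while the weighted sampler keeps revisiting examples the policy previously failed on, which is one plausible reading of "difficulty-aware retrospective sampling".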

Fanqi Wan, Weizhou Shen, Shengyi Liao, Yingcheng Shi, Chenliang Li, Ziyi Yang, Ji Zhang, Fei Huang, Jingren Zhou, Ming Yan • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Long-context Question Answering | HotpotQA (In-Distribution) | Accuracy | 85.2 | 72 |
| Multi-hop Question Answering | 2WikiMultiHopQA (Out-of-Distribution, OOD) | Accuracy | 74.2 | 72 |
| Question Answering | RULER-HQA, 7K context length | Sub-EM Accuracy | 0.7266 | 11 |
| Question Answering | RULER-HQA, 14K context length | Normalized Sub-EM Accuracy | 0.75 | 11 |
| Question Answering | RULER-HQA, 28K context length | Normalized Sub-EM Accuracy | 72.66 | 11 |
| Question Answering | RULER-HQA, 56K context length | Normalized Sub-EM Accuracy | 60.94 | 11 |
| Question Answering | RULER-HQA, 112K context length | Normalized Sub-EM Accuracy | 31.25 | 11 |
| Question Answering | RULER-HQA, 224K context length | Normalized Sub-EM Accuracy | 17.19 | 11 |
| Question Answering | RULER-HQA, 448K context length | Normalized Sub-EM Accuracy | 13.28 | 11 |
| Question Answering | RULER-HQA, 896K context length | Normalized Sub-EM Accuracy | 11.72 | 11 |
