
Every Token Counts: Generalizing 16M Ultra-Long Context in Large Language Models

About

This work explores the challenge of building "machines that can remember," framing long-term memory as a problem of efficient ultra-long-context modeling. We argue that this requires three key properties: sparsity, random-access flexibility, and length generalization. To address ultra-long-context modeling, we leverage Hierarchical Sparse Attention (HSA), a novel attention mechanism that satisfies all three properties. We integrate HSA into Transformers to build HSA-UltraLong, an 8B-parameter MoE model trained on over 8 trillion tokens, and rigorously evaluate it on tasks with in-domain and out-of-domain context lengths to demonstrate its capability for handling ultra-long contexts. Results show that our model performs comparably to full-attention baselines on in-domain lengths while achieving over 90% accuracy on most in-context retrieval tasks with contexts up to 16M tokens. This report outlines our experimental insights and open problems, contributing a foundation for future research in ultra-long-context modeling.
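To make the sparsity property concrete: one common family of sparse-attention designs retrieves a small set of key-value chunks per query instead of attending over the full context. The sketch below is a generic, illustrative chunk-retrieval attention in NumPy, not the paper's actual HSA algorithm; the function name, chunk summarization by mean pooling, and top-k selection are all simplifying assumptions for exposition.

```python
import numpy as np

def chunked_sparse_attention(q, K, V, chunk_size=4, top_k=2):
    """Toy chunk-level sparse attention (illustrative, not HSA itself):
    score the query against per-chunk key summaries, keep only the
    top-k chunks, then run ordinary softmax attention over their keys."""
    n, d = K.shape
    n_chunks = n // chunk_size
    Kc = K[: n_chunks * chunk_size].reshape(n_chunks, chunk_size, d)
    Vc = V[: n_chunks * chunk_size].reshape(n_chunks, chunk_size, d)

    # Mean-pooled chunk summaries act as a coarse retrieval index,
    # so cost scales with top_k * chunk_size rather than n.
    summaries = Kc.mean(axis=1)                   # (n_chunks, d)
    chunk_scores = summaries @ q                  # (n_chunks,)
    selected = np.argsort(chunk_scores)[-top_k:]  # top-k chunk ids

    # Dense softmax attention restricted to the retrieved chunks.
    K_sel = Kc[selected].reshape(-1, d)
    V_sel = Vc[selected].reshape(-1, d)
    scores = K_sel @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V_sel

rng = np.random.default_rng(0)
q = rng.standard_normal(8)
K = rng.standard_normal((16, 8))
V = rng.standard_normal((16, 8))
out = chunked_sparse_attention(q, K, V)
print(out.shape)  # (8,)
```

Because chunk selection depends only on the query and chunk summaries, this style of mechanism gives random-access flexibility (any chunk can be retrieved regardless of position), which is one intuition for why such designs can generalize to contexts far longer than those seen in training.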

Xiang Hu, Zhanchao Zhou, Ruiqi Liang, Zehuan Li, Wei Wu, Jianguo Li • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 67.43 | 1460 |
| Language Understanding | MMLU | Accuracy | 60.71 | 756 |
| Question Answering | ARC Challenge | Accuracy | 71.53 | 749 |
| Mathematical Reasoning | MATH | Accuracy | 48 | 643 |
| Mathematical Reasoning | GSM8K | Accuracy | 72.93 | 358 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 80.69 | 329 |
| Instruction Following | IFEval | -- | -- | 292 |
| Code Generation | HumanEval+ | -- | -- | 189 |
| Code Generation | MBPP+ | Accuracy | 62.17 | 75 |
| Chinese Multitask Language Understanding | CMMLU | Accuracy | 64.41 | 50 |

(10 of 19 benchmark rows shown)
