
Punctuation-aware Hybrid Trainable Sparse Attention for Large Language Models

About

Attention is the fundamental mechanism for long-context modeling in large language models (LLMs), yet dense attention becomes computationally prohibitive for long sequences due to its quadratic complexity. Sparse attention has therefore attracted increasing interest as a scalable alternative. However, existing sparse attention methods rely on coarse-grained semantic representations during block selection, which blur intra-block semantic boundaries and lose critical information. To address this issue, we propose Punctuation-aware Hybrid Sparse Attention (PHSA), a natively trainable sparse attention framework that leverages punctuation tokens as semantic boundary anchors. Specifically, (1) we design a dual-branch aggregation mechanism that fuses global semantic representations with punctuation-enhanced boundary features, preserving the core semantic structure while adding almost no computational overhead; and (2) we introduce an extreme-sparsity-adaptive training and inference strategy that stabilizes model behavior under very low token activation ratios. Extensive experiments on general benchmarks and long-context evaluations demonstrate that PHSA consistently outperforms dense attention and state-of-the-art sparse attention baselines, including InfLLM v2. For a 0.6B-parameter model with 32k-token input sequences, PHSA reduces information loss by 10.8% at a sparsity ratio of 97.3%.

Junxiang Qiu, Shuo Wang, Zhengsu Chen, Hengheng Zhang, Jinda Lu, Changcheng Li, Qi Tian• 2026
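The abstract does not give implementation details, but the dual-branch block selection it describes can be illustrated with a toy sketch: one branch mean-pools all keys in a block (global semantics), a second branch pools only punctuation-token keys (boundary features), the two are fused, and the query is scored against the fused block representations to pick the blocks to attend to. All names, the fusion weight `alpha`, and the pooling choices below are assumptions for illustration, not the authors' actual PHSA implementation.

```python
import numpy as np

def select_blocks(q, K, punct_mask, block_size=4, top_k=2, alpha=0.5):
    """Toy sketch of punctuation-aware block selection (assumed design).

    q:          (d,) query vector
    K:          (n, d) key matrix, n divisible by block_size
    punct_mask: (n,) boolean, True where the token is punctuation
    Returns the indices of the top_k highest-scoring key blocks.
    """
    n, d = K.shape
    n_blocks = n // block_size
    scores = np.empty(n_blocks)
    for b in range(n_blocks):
        blk = K[b * block_size:(b + 1) * block_size]
        msk = punct_mask[b * block_size:(b + 1) * block_size]
        # Branch 1: global semantic representation (mean pool over the block).
        global_rep = blk.mean(axis=0)
        # Branch 2: boundary representation pooled over punctuation tokens only;
        # fall back to the global branch if the block has no punctuation.
        punct_rep = blk[msk].mean(axis=0) if msk.any() else global_rep
        # Fuse the two branches and score the query against the result.
        rep = (1.0 - alpha) * global_rep + alpha * punct_rep
        scores[b] = q @ rep
    return np.argsort(scores)[::-1][:top_k]
```

Only the selected blocks would then participate in attention, so the per-query cost scales with `top_k * block_size` rather than the full sequence length; the punctuation branch costs one extra masked pooling per block.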

Related benchmarks

Task                                       Dataset         Result             Rank
Commonsense Reasoning                      HellaSwag       -                  1891
Code Generation                            HumanEval       -                  1036
Mathematical Reasoning                     MATH            -                  882
Science Question Answering                 ARC Challenge   Accuracy 41.97     342
Mathematical Reasoning                     MathQA          Accuracy 42.91     305
Mathematical Reasoning                     GSM8K           Math Score 56.63   197
Science Question Answering                 ARC Easy        Accuracy 72.69     155
Word Prediction                            LAMBADA         Accuracy 50.13     148
Massive Multitask Language Understanding   MMLU            Accuracy 55.26     117
Complex Reasoning                          BBH             Accuracy 39.56     40

Showing 10 of 28 rows
