
Flux Attention: Context-Aware Hybrid Attention for Efficient LLM Inference

About

The quadratic computational complexity of standard attention mechanisms presents a severe scalability bottleneck for LLMs in long-context scenarios. While hybrid attention mechanisms combining Full Attention (FA) and Sparse Attention (SA) offer a potential solution, existing methods typically rely on static allocation ratios that fail to accommodate the variable retrieval demands of different tasks. Furthermore, head-level dynamic sparsity often introduces significant computational load imbalance and synchronization long-tails, which hinder hardware acceleration during autoregressive decoding. To bridge this gap, we introduce Flux Attention, a context-aware framework that dynamically optimizes attention computation at the layer level. By integrating a lightweight Layer Router into frozen pretrained LLMs, the proposed method adaptively routes each layer to FA or SA based on the input context. This layer-wise routing preserves high-fidelity information retrieval while ensuring contiguous memory access, translating theoretical computational reductions into practical wall-clock speedups. As a parameter-efficient approach, our framework requires only 12 hours of training on 8$\times$A800 GPUs. Extensive experiments across multiple long-context and mathematical reasoning benchmarks demonstrate that Flux Attention achieves a superior trade-off between performance and inference speed compared with baseline models, with speed improvements of up to $2.8\times$ and $2.0\times$ in the prefill and decode stages, respectively.
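To make the layer-wise routing idea concrete, here is a minimal sketch of how a lightweight per-layer router could score a pooled context representation and emit an FA/SA decision for each layer. This is an illustrative toy, not the paper's implementation: the feature vector, the per-layer weights, and the `layer_router` function are all hypothetical, and the paper's actual router architecture and training procedure are not described here.

```python
import math

def layer_router(context_features, weight, bias, threshold=0.5):
    """Hypothetical layer-level router: scores the pooled input context
    and routes the layer to Full Attention ("FA") or Sparse Attention
    ("SA"). A sigmoid score above `threshold` selects FA (high retrieval
    demand); otherwise SA suffices for that layer."""
    # Linear projection of the context features, then a sigmoid gate.
    score = sum(f * w for f, w in zip(context_features, weight)) + bias
    prob_fa = 1.0 / (1.0 + math.exp(-score))
    return ("FA" if prob_fa > threshold else "SA"), prob_fa

# Toy example: 4 layers, each with its own (made-up) router parameters.
features = [0.9, -0.2, 0.4]           # pooled context statistics
routers = [([1.2, 0.3, 0.8], 0.1),    # per-layer (weight, bias) pairs
           ([-0.5, 0.9, -1.1], -0.4),
           ([0.7, 0.2, 0.5], 0.0),
           ([-1.0, -0.6, 0.3], -0.2)]

# One routing decision per layer, computed once per input context.
plan = [layer_router(features, w, b)[0] for w, b in routers]
print(plan)  # e.g. ['FA', 'SA', 'FA', 'SA'] for these toy weights
```

Because the decision is made per layer rather than per head, every device executes the same kernel within a layer, which is what preserves contiguous memory access and avoids the head-level load-imbalance and synchronization long-tails the abstract describes.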

Quantong Qiu, Zhiyi Hong, Yi Yang, Haitian Wang, Kebin Liu, Qingqing Dang, Juntao Li, Min Zhang• 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Long-context Understanding | LongBench v2 | Overall Score: 33.41 | 109 |
| Long-context multi-task evaluation | LongBench-e | Qasper: 45.25 | 24 |
| Mathematical Reasoning | Math, GSM8K, AIME24 | Accuracy (GSM8K): 46.9 | 24 |
| Long-context Understanding | RULER | Performance (8K Context): 92.72 | 24 |

Other info

GitHub
