
Training-Free Long-Context Scaling of Large Language Models

About

The ability of Large Language Models (LLMs) to process and generate coherent text is markedly weakened when the number of input tokens exceeds their pretraining length. Given the expensive overhead of finetuning large-scale models with longer sequences, we propose Dual Chunk Attention (DCA), which enables Llama2 70B to support context windows of more than 100k tokens without continual training. By decomposing the attention computation for long sequences into chunk-based modules, DCA effectively captures the relative positional information of tokens within the same chunk (Intra-Chunk) and across distinct chunks (Inter-Chunk), and integrates seamlessly with Flash Attention. In addition to its impressive extrapolation capability, DCA achieves performance on practical long-context tasks that is comparable to or even better than that of finetuned models. When compared with proprietary models, our training-free 70B model attains 94% of the performance of gpt-3.5-16k, indicating it is a viable open-source alternative. All code and data used in this work are released at https://github.com/HKUNLP/ChunkLlama.

Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, Lingpeng Kong • 2024
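The core idea described in the abstract is a remapping of position indices so that query-key relative distances never exceed the pretraining window. Below is a minimal sketch of that remapping, assuming the simplest reading of the intra-/inter-chunk scheme: keys reuse in-chunk positions, and queries attending across chunks are capped at the maximum in-chunk position. The helper name is ours, and DCA's successive-chunk component is omitted for brevity; see the linked repository for the actual implementation.

```python
def dca_position_ids(seq_len, chunk_size):
    """Sketch of DCA-style position remapping (names are illustrative).

    Intra-chunk attention reuses positions 0..chunk_size-1 inside each
    chunk, so relative distances look exactly as they did in pretraining.
    For inter-chunk attention, every query is assigned the maximum
    in-chunk position (chunk_size - 1), which caps the query-key relative
    distance at chunk_size - 1 no matter how far apart the chunks are.
    """
    key_pos = [i % chunk_size for i in range(seq_len)]   # positions seen by keys
    intra_q = list(key_pos)                              # query positions, same chunk
    inter_q = [chunk_size - 1] * seq_len                 # capped query positions
    return intra_q, inter_q, key_pos
```

Because every relative distance stays within `[0, chunk_size - 1]`, the rotary embeddings applied during pretraining remain in-distribution, which is what lets the method scale context length without any further training.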

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Long-context Language Understanding | LongBench | M-Avg | 44.44 | 292 |
| Language Modeling | PG-19 | Perplexity | 9.07 | 160 |
| Language Modeling | PG-19 (test) | Perplexity | 11.27 | 110 |
| Long-context Language Understanding | LongBench 1.0 (test) | MultiNews | 24.78 | 61 |
| Long-context Language Understanding | L-Eval (test) | Coursera | 56.24 | 26 |
| Long-context Language Understanding | L-Eval | Coursera | 54.36 | 26 |
| Fact chaining & relational reasoning | Long Context Benchmarks | Accuracy (8k context) | 52.8 | 21 |
| Multi-round co-reference resolution | Long Context Benchmarks | Score (8k context) | 35.7 | 21 |
| Synthetic recall | Long Context Benchmarks | Synthetic Recall (8k context) | 99.2 | 21 |
| Passage re-ranking | Long Context Benchmarks | Performance (8k context) | 47.5 | 21 |

Showing 10 of 16 rows.
