
YaRN: Efficient Context Window Extension of Large Language Models

About

Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state of the art in context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. Code is available at https://github.com/jquesnelle/yarn
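The core idea behind YaRN's "NTK-by-parts" interpolation can be sketched as follows: each RoPE dimension rotates at a different frequency, and only dimensions whose rotation wavelength exceeds the original context window are interpolated by the scale factor, with a linear ramp between the two regimes. This is a minimal illustrative sketch, not the repository's actual API; the function name and the `alpha`/`beta` ramp boundary parameters are assumptions chosen here for clarity (the paper reports alpha=1, beta=32 for LLaMA).

```python
import math

def yarn_rope_frequencies(dim, base=10000.0, scale=16.0,
                          orig_max_len=4096, alpha=1.0, beta=32.0):
    """Hypothetical sketch of YaRN-style per-dimension frequency interpolation.

    `scale` is the context-extension factor (e.g. 4096 -> 65536 is scale=16).
    Dimensions that complete fewer than `alpha` rotations over the original
    context are fully interpolated; those completing more than `beta`
    rotations are left untouched; a linear ramp blends the region in between.
    """
    freqs = []
    for i in range(0, dim, 2):
        freq = base ** (-i / dim)               # standard RoPE frequency
        wavelength = 2 * math.pi / freq         # positions per full rotation
        ratio = orig_max_len / wavelength       # rotations in original window
        if ratio < alpha:                       # low frequency: interpolate
            freq = freq / scale
        elif ratio <= beta:                     # ramp between the two regimes
            gamma = (ratio - alpha) / (beta - alpha)
            freq = (1 - gamma) * (freq / scale) + gamma * freq
        # ratio > beta: high frequency, keep the original value
        freqs.append(freq)
    return freqs
```

The high-frequency dimensions preserve local positional resolution, while interpolating the low-frequency dimensions keeps long-range positions inside the range seen during pre-training; the paper additionally rescales attention logits by a temperature depending on `scale`, which this sketch omits.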

Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole • 2023

Related benchmarks

Task | Dataset | Result | Rank
Long-context Language Understanding | LongBench | M-Avg: 43.61 | 219
Long-context Language Understanding | LongBench (test) | Average Score: 13.07 | 133
Language Modeling | PG-19 (test) | Perplexity: 11.06 | 106
Language Modeling | PG-19 | Perplexity: 8.97 | 96
Long-context Question Answering | LongBench (test) | HotpotQA: 53.2 | 59
Long-context Understanding | LongBench v2 | -- | 37
Language Modeling | Arxiv Proof-pile | Perplexity: 2.51 | 32
Long-context Language Understanding | L-Eval (test) | Coursera: 55.96 | 26
Long-context Language Understanding | L-Eval | Coursera: 56.4 | 26
Long-context Language Understanding | LongBench 1.0 (test) | MultiNews: 6.31 | 21

Showing 10 of 33 rows
