LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models
About
Large language models (LLMs) face significant challenges in handling long-context tasks because of their limited effective context window size during pretraining, which restricts their ability to generalize over extended sequences. Meanwhile, extending the context window of LLMs through post-pretraining is highly resource-intensive. To address this, we introduce LongRecipe, an efficient training strategy for extending the context window of LLMs, comprising impactful token analysis, position index transformation, and training optimization strategies. It simulates long-sequence inputs while maintaining training efficiency and significantly improves the model's understanding of long-range dependencies. Experiments on three types of LLMs show that LongRecipe can utilize long sequences while requiring only 30% of the target context window size, and it reduces computational training resources by over 85% compared to full-sequence training. Furthermore, LongRecipe preserves the original LLM's capabilities on general tasks. Ultimately, we can extend the effective context window of open-source LLMs from 8k to 128k, achieving performance close to GPT-4 with just one day of dedicated training on a single GPU with 80 GB of memory. Our code is released at https://github.com/zhiyuanhubj/LongRecipe.
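To illustrate the position index transformation idea mentioned above, the sketch below shows one way a short training sequence can be assigned position ids that span a much larger target context window. This is a minimal, hypothetical sketch: the function name `transform_position_ids`, the segment count, and the skip-based remapping scheme are assumptions for illustration, not the repository's actual implementation; see the GitHub repo for the real code.

```python
import torch

def transform_position_ids(seq_len: int, target_window: int, num_segments: int = 4) -> torch.Tensor:
    """Return `seq_len` position ids spread across [0, target_window), so a short
    training sequence exposes the model to long-range relative distances without
    processing `target_window` tokens. (Illustrative sketch, not LongRecipe's code.)"""
    assert seq_len <= target_window
    slack = target_window - seq_len  # positions to "skip" rather than train on

    # Randomly split the slack into `num_segments` gaps inserted before each chunk.
    cuts, _ = torch.sort(torch.randint(0, slack + 1, (num_segments - 1,)))
    bounds = torch.cat([torch.zeros(1, dtype=torch.long), cuts, torch.full((1,), slack)])
    gaps = bounds.diff()  # num_segments gap sizes summing to `slack`

    # Split the real sequence into `num_segments` contiguous chunks.
    base = seq_len // num_segments
    sizes = [base] * (num_segments - 1) + [seq_len - base * (num_segments - 1)]

    pieces, offset = [], 0
    for gap, size in zip(gaps.tolist(), sizes):
        offset += gap  # jump over the skipped positions
        pieces.append(torch.arange(offset, offset + size))
        offset += size
    return torch.cat(pieces)  # strictly increasing, all values < target_window

# Example: a 24k-token batch whose position ids span an 80k-token window;
# the result would be passed as `position_ids` in the model's forward pass.
pos_ids = transform_position_ids(seq_len=24_000, target_window=80_000)
```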
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | Accuracy | 58.7 | 983 |
| Code Generation | HumanEval | Pass@1 | 29.3 | 850 |
| Language Understanding | MMLU | Accuracy | 65.9 | 756 |
| Long-context Understanding | LongBench | Overall Average Score | 26.9 | 115 |
| Long-context Understanding | RULER | Score | 76 | 45 |
| Multi-needle Retrieval | NIAH (M) | Accuracy | 82.6 | 35 |