# LLoCO: Learning Long Contexts Offline

## About
Processing long contexts remains a challenge for large language models (LLMs) due to the quadratic computational and memory overhead of the self-attention mechanism and the substantial KV cache sizes during generation. We propose LLoCO, a novel approach that addresses this problem by learning contexts offline through context compression and in-domain parameter-efficient finetuning with LoRA. Our method enables an LLM to create a concise representation of the original context and efficiently retrieve relevant information to answer questions accurately. Our approach extends the effective context window of a 4k-token LLaMA2-7B model to handle up to 128k tokens. We evaluate LLoCO on several long-context question-answering datasets, demonstrating that it significantly outperforms in-context learning while using $30\times$ fewer tokens during inference. LLoCO achieves up to $7.62\times$ speed-up during inference and $11.52\times$ higher throughput during finetuning, substantially reducing the cost of long-document question answering. This makes it a promising solution for efficient long-context processing. Our code is publicly available at https://github.com/jeffreysijuntan/lloco.
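The workflow described above — compress each document offline, pair it with an in-domain LoRA adapter, and feed only the short representation plus the question at inference — can be sketched with a toy example. This is a minimal illustration with hypothetical names (`compress_context`, `LLoCOStore`, `lora-qa-domain`), not the actual repo API; the real compression is a learned context encoder, which is mimicked here by simple subsampling at the paper's reported $30\times$ ratio.

```python
# Toy sketch of an LLoCO-style offline pipeline (hypothetical names, not the
# real github.com/jeffreysijuntan/lloco API): documents are compressed once
# offline, stored with an in-domain LoRA adapter, and only the compressed
# representation is fed to the model at inference time.

def compress_context(tokens, ratio=30):
    """Stand-in for the learned context encoder: shrink the token sequence
    ~ratio x (the paper reports ~30x fewer tokens during inference)."""
    return tokens[::ratio]

class LLoCOStore:
    """Maps a document id to its compressed context and the LoRA adapter
    finetuned on that document's domain."""
    def __init__(self):
        self._entries = {}

    def ingest(self, doc_id, tokens, adapter_name):
        # Offline step: compress once, remember which adapter to use.
        self._entries[doc_id] = {
            "compressed": compress_context(tokens),
            "adapter": adapter_name,
        }

    def prepare_inference(self, doc_id, question_tokens):
        # Online step: the LLM sees compressed context + question only,
        # so a 128k-token document fits a 4k-token context window.
        entry = self._entries[doc_id]
        return entry["adapter"], entry["compressed"] + question_tokens

store = LLoCOStore()
doc = list(range(120_000))                    # pretend 120k-token document
store.ingest("paper-1", doc, "lora-qa-domain")
adapter, prompt = store.prepare_inference("paper-1", ["Q:", "who?"])
print(adapter, len(prompt))                   # 4000 compressed tokens + 2
```

The design point the sketch captures is that all expensive work (compression, LoRA finetuning) happens offline per document collection, while inference pays only for the short compressed prompt.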
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | NarrativeQA (test) | -- | -- | 68 |
| Long-context Language Understanding | RULER 32k context length | VT Score | 0.00e+0 | 33 |
| Question Answering | QASPER (test) | F1 Score (Match) | 18.2 | 27 |
| Long-context Language Understanding | RULER 16k context length | -- | -- | 16 |
| Multiple-choice Question Answering | LongBench v2 (val) | Overall Accuracy | 28.2 | 15 |
| Long-context Language Understanding | RULER 4k context length | VT Score | 0.00e+0 | 10 |
| Document Summarization | QMSum | G-mean | 12.99 | 9 |
| Document Summarization | GovReport | G-mean | 5.73 | 9 |
| Long-context Language Understanding | RULER 64k context length | QA Score | 9 | 9 |
| Question Answering | TriviaQA | F1 Score | 63.21 | 8 |