
LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression

About

In long context scenarios, large language models (LLMs) face three main challenges: higher computational cost, performance reduction, and position bias. Research indicates that LLM performance hinges on the density and position of key information in the input prompt. Inspired by these findings, we propose LongLLMLingua for prompt compression towards improving LLMs' perception of the key information to simultaneously address the three challenges. Our extensive evaluation across various long context scenarios demonstrates that LongLLMLingua not only enhances performance but also significantly reduces costs and latency. For instance, in the NaturalQuestions benchmark, LongLLMLingua boosts performance by up to 21.4% with around 4x fewer tokens in GPT-3.5-Turbo, leading to substantial cost savings. It achieves a 94.0% cost reduction in the LooGLE benchmark. Moreover, when compressing prompts of about 10k tokens at ratios of 2x-6x, LongLLMLingua can accelerate end-to-end latency by 1.4x-2.6x. Our code is available at https://aka.ms/LongLLMLingua.
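To make the cost figures above concrete, here is a minimal sketch of the arithmetic behind token-count savings from prompt compression. The per-1k-token price used is a hypothetical placeholder, not a figure from the paper:

```python
def prompt_cost(prompt_tokens: int, compression_ratio: float, price_per_1k: float) -> float:
    """Cost of sending a prompt after compressing it by `compression_ratio`x."""
    return prompt_tokens / compression_ratio / 1000 * price_per_1k

# A 10k-token prompt compressed 4x leaves only 2,500 tokens to pay for.
original = prompt_cost(10_000, 1.0, price_per_1k=0.5)    # 5.0
compressed = prompt_cost(10_000, 4.0, price_per_1k=0.5)  # 1.25
savings = 1 - compressed / original                       # 0.75 -> 75% of prompt cost saved
```

At a 4x compression ratio, input-token cost drops by 75% regardless of the actual per-token price, which is why the paper can report large cost reductions independently of a specific pricing scheme.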

Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, Lili Qiu • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | Accuracy | 43.46 | 1362 |
| Multi-hop Question Answering | HotpotQA | F1 Score | 38.07 | 294 |
| Long-context Language Understanding | LongBench | -- | -- | 292 |
| Multi-hop Question Answering | 2WikiMQA | F1 Score | 35.3 | 161 |
| Question Answering | 2Wiki | F1 | 53.6 | 152 |
| Multi-hop Question Answering | 2Wiki | Exact Match | 25.5 | 152 |
| Long-context Language Understanding | LongBench (test) | Average Score | 34.4 | 147 |
| Long-context Understanding | LongBench (test) | Avg Score | 35.5 | 136 |
| Question Answering | HotpotQA | F1 | 49.4 | 128 |
| Question Answering | Bamboogle | Exact Match | 20.3 | 120 |

Showing 10 of 102 rows.
