LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models

About

Large language models (LLMs) are used across a wide range of applications thanks to their remarkable capabilities. With techniques such as chain-of-thought (CoT) prompting and in-context learning (ICL), the prompts fed to LLMs are becoming increasingly long, sometimes exceeding tens of thousands of tokens. To accelerate model inference and reduce cost, this paper presents LLMLingua, a coarse-to-fine prompt compression method that combines a budget controller to maintain semantic integrity under high compression ratios, a token-level iterative compression algorithm to better model the interdependence between compressed contents, and an instruction-tuning-based method for distribution alignment between language models. Experiments and analysis on four datasets from different scenarios, i.e., GSM8K, BBH, ShareGPT, and Arxiv-March23, show that the proposed approach yields state-of-the-art performance and allows for up to 20x compression with little performance loss. Our code is available at https://aka.ms/LLMLingua.
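To make the idea concrete, below is a minimal illustrative sketch, not the authors' implementation and not the released llmlingua API: a small causal language model scores each prompt token by its surprisal (negative log-likelihood), and only the most informative share of tokens is kept. The budget controller, iterative segment-by-segment compression, and distribution alignment described in the abstract are omitted; the model name ("gpt2") and keep ratio are illustrative choices.

```python
# Illustrative sketch of perplexity-based token-level prompt compression.
# Assumption: a small causal LM's per-token surprisal is a usable proxy for
# how much information a token carries; low-surprisal tokens are dropped.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def compress_prompt(prompt: str, keep_ratio: float = 0.5,
                    model_name: str = "gpt2") -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]

    with torch.no_grad():
        logits = model(input_ids).logits  # [1, seq_len, vocab]

    # Negative log-likelihood of token t under the prediction made at t-1.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = input_ids[:, 1:]
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)[0]

    # Keep the first token plus the most surprising share of the rest,
    # preserving the original token order.
    n_keep = max(1, min(nll.numel(), int(nll.numel() * keep_ratio)))
    keep_positions = torch.topk(nll, n_keep).indices + 1  # shift past position 0
    keep_positions = torch.cat([torch.tensor([0]), keep_positions]).sort().values

    return tokenizer.decode(input_ids[0, keep_positions],
                            skip_special_tokens=True)


if __name__ == "__main__":
    demo = ("Question: Natalia sold clips to 48 of her friends in April, and "
            "then she sold half as many clips in May. How many clips did she "
            "sell altogether? Let's think step by step.")
    print(compress_prompt(demo, keep_ratio=0.5))
```

LLMLingua's full pipeline layers the budget controller and the iterative, segment-wise token-level compression on top of this kind of scoring, which is what allows it to reach up to 20x compression with little performance loss.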

Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, Lili Qiu · 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Mathematical Reasoning | GSM8K | Accuracy | 72.86 | 1362
Mathematical Reasoning | GSM8K (test) | Accuracy | 49.96 | 770
Reasoning | BBH | Accuracy | 54.98 | 672
Mathematical Reasoning | SVAMP | Accuracy | 27.5 | 403
Multi-hop Question Answering | HotpotQA | F1 Score | 31.54 | 294
Long-context Language Understanding | LongBench | -- | -- | 292
Arithmetic Reasoning | MultiArith | Accuracy | 22.33 | 229
Long-context Understanding | LongBench (test) | Avg Score | 22.7 | 136
Mathematical Reasoning | GSM8K | EM | 22.74 | 123
Arithmetic Reasoning | ADDSUB | Accuracy | 22.28 | 123
Showing 10 of 62 rows
