LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models

About

Large language models (LLMs) have been applied in various applications due to their astonishing capabilities. With advancements in technologies such as chain-of-thought (CoT) prompting and in-context learning (ICL), the prompts fed to LLMs are becoming increasingly lengthy, even exceeding tens of thousands of tokens. To accelerate model inference and reduce cost, this paper presents LLMLingua, a coarse-to-fine prompt compression method that involves a budget controller to maintain semantic integrity under high compression ratios, a token-level iterative compression algorithm to better model the interdependence between compressed contents, and an instruction tuning based method for distribution alignment between language models. We conduct experiments and analysis over four datasets from different scenarios, i.e., GSM8K, BBH, ShareGPT, and Arxiv-March23, showing that the proposed approach yields state-of-the-art performance and allows for up to 20x compression with little performance loss. Our code is available at https://aka.ms/LLMLingua.

Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, Lili Qiu • 2023
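
The core token-level intuition can be sketched in a few lines: use a small causal language model to score how predictable each token is, then keep only the hardest-to-predict (highest-surprisal) tokens under a budget. The snippet below is a minimal illustration of that idea only; it omits the paper's budget controller, iterative segment-wise compression, and distribution alignment, and the choice of model (gpt2) and keep-ratio are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of perplexity-based prompt compression (illustrative only;
# NOT the full LLMLingua algorithm). A small LM scores each token's
# surprisal given its prefix; the most predictable tokens are dropped.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed small scoring LM
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def compress(prompt: str, keep_ratio: float = 0.5) -> str:
    """Keep the highest-surprisal tokens; drop the most predictable ones."""
    input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits
    # Negative log-likelihood of each token given its prefix.
    # Token 0 has no prefix, so it is always kept.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    nll = -log_probs.gather(1, input_ids[0, 1:, None]).squeeze(1)
    k = max(1, int(nll.numel() * keep_ratio))
    keep = set((torch.topk(nll, k).indices + 1).tolist()) | {0}
    # Preserve the original token order among the kept tokens.
    kept_ids = [t for i, t in enumerate(input_ids[0].tolist()) if i in keep]
    return tokenizer.decode(kept_ids)

print(compress("The quick brown fox jumps over the lazy dog because it is very tired.", 0.5))
```

For the full method, including the budget controller and distribution alignment, see the released implementation at https://aka.ms/LLMLingua.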

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
| --- | --- | --- | --- | --- |
| Mathematical Reasoning | GSM8K | Accuracy | 72.86 | 983 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 49.96 | 751 |
| Reasoning | BBH | Accuracy | 54.98 | 507 |
| Mathematical Reasoning | SVAMP | Accuracy | 27.5 | 368 |
| Multi-hop Question Answering | HotpotQA | F1 Score | 31.54 | 221 |
| Arithmetic Reasoning | MultiArith | Accuracy | 22.33 | 181 |
| Long-context Understanding | LongBench | Overall Average Score | 33.31 | 115 |
| Mathematical Reasoning | GSM8K | EM | 22.74 | 115 |
| Long-context Understanding | LongBench (test) | Avg Score | 22.7 | 80 |
| Arithmetic Reasoning | ADDSUB | Accuracy | 22.28 | 76 |
Showing 10 of 50 benchmark results.
