
EoRA: Fine-tuning-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation

About

While post-training compression techniques effectively reduce the memory footprint, latency, and power consumption of Large Language Models (LLMs), they often cause noticeable accuracy degradation and remain limited by hardware and kernel constraints that restrict supported compression formats, ultimately reducing flexibility across deployment scenarios. In this work, we propose EoRA, a novel fine-tuning-free method that augments compressed LLMs with low-rank matrices, allowing users to rapidly enhance task-specific performance and freely balance the trade-off between accuracy and computational overhead beyond the constraints of compression formats. EoRA consistently outperforms prior fine-tuning-free low-rank methods in recovering the accuracy of compressed LLMs, achieving notable accuracy improvements (e.g., 10.84% on ARC-Challenge, 6.74% on MathQA, and 11.45% on GSM8K for LLaMA3-8B compressed to 3-bit). We also introduce an optimized CUDA kernel that accelerates inference by up to 1.4x and reduces memory overhead by quantizing EoRA. Overall, EoRA offers a prompt solution for improving the accuracy of compressed models under varying user requirements, enabling more efficient and flexible deployment of LLMs. Code is available at https://github.com/NVlabs/EoRA.
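The core idea of compensating a compressed weight with a low-rank correction can be sketched in a few lines. Note this is a simplified illustration, not the paper's method: EoRA projects the compression error into an eigenspace derived from input activation statistics before the low-rank approximation, whereas the sketch below uses a plain truncated SVD of the residual. The function name, rank, and the crude rounding stand-in for quantization are all hypothetical.

```python
import numpy as np

def lowrank_compensation(W, W_compressed, rank):
    """Approximate the compression residual W - W_compressed with a
    rank-r factorization B @ A, so that W_compressed + B @ A ≈ W.
    (Plain-SVD sketch; EoRA itself uses an eigenspace-weighted SVD.)"""
    R = W - W_compressed                           # compression error
    U, S, Vt = np.linalg.svd(R, full_matrices=False)
    B = U[:, :rank] * S[:rank]                     # (out, rank), columns scaled by singular values
    A = Vt[:rank, :]                               # (rank, in)
    return B, A

# Toy demo: simulate compression with coarse rounding of the weights.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W_q = np.round(W * 4) / 4                          # crude quantization stand-in
B, A = lowrank_compensation(W, W_q, rank=16)

err_before = np.linalg.norm(W - W_q)               # error of compressed weight alone
err_after = np.linalg.norm(W - (W_q + B @ A))      # error after low-rank compensation
assert err_after < err_before
```

At inference time, the correction is applied as `y = W_q @ x + B @ (A @ x)`, so the extra cost scales with the chosen rank rather than the full weight dimensions, which is what lets users trade accuracy against overhead without changing the compression format.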

Shih-Yang Liu, Maksim Khadkevich, Nai Chit Fung, Charbel Sakr, Chao-Han Huck Yang, Chien-Yi Wang, Saurav Muralidharan, Hongxu Yin, Kwang-Ting Cheng, Jan Kautz, Yu-Chiang Frank Wang, Pavlo Molchanov, Min-Hung Chen• 2024

Related benchmarks

Task                     Dataset                 Metric       Result   Rank
Language Modeling        WikiText2               Perplexity   6.89     2839
Mathematical Reasoning   MathQA                  Accuracy     56.04    305
Commonsense Reasoning    ARC Challenge           Accuracy     37.54    190
Math Reasoning           GSM8K                   Accuracy     30.7     187
Commonsense Reasoning    ARC-C                   Accuracy     55.46    172
Language Modeling        WikiText2               Perplexity   5.03     162
Question Answering       MathQA (test)           Accuracy     37.21    41
Summarization            CNN/DailyMail (test)    ROUGE-L      18.12    33
