
Delta-CoMe: Training-Free Delta-Compression with Mixed-Precision for Large Language Models

About

Fine-tuning is a crucial process for adapting large language models (LLMs) to diverse applications. In certain scenarios, such as multi-tenant serving, deploying multiple LLMs becomes necessary to meet complex demands. Recent studies suggest decomposing a fine-tuned LLM into a base model and corresponding delta weights, which are then compressed using low-rank or low-bit approaches to reduce costs. In this work, we observe that existing low-rank and low-bit compression methods can significantly harm the performance of task-specific fine-tuned LLMs (e.g., WizardMath for math problems). Motivated by the long-tail distribution of singular values in the delta weights, we propose a delta quantization approach using mixed precision. This method employs higher-bit representations for the singular vectors corresponding to larger singular values. We evaluate our approach on various fine-tuned LLMs, including math LLMs, code LLMs, chat LLMs, and even VLMs. Experimental results demonstrate that our approach performs comparably to fully fine-tuned LLMs, surpassing both low-rank and low-bit baselines by a considerable margin. Additionally, we show that our method is compatible with various backbone LLMs, such as Llama-2, Llama-3, and Mistral, highlighting its generalizability.

Bowen Ping, Shuo Wang, Hanqing Wang, Xu Han, Yuzhuang Xu, Yukun Yan, Yun Chen, Baobao Chang, Zhiyuan Liu, Maosong Sun • 2024
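The core idea in the abstract — decompose the delta weights with SVD and spend more bits on singular vectors tied to larger singular values — can be sketched as follows. This is an illustrative NumPy mock-up, not the paper's implementation: the `groups` bit-allocation schedule and the symmetric uniform quantizer are assumptions chosen for clarity.

```python
import numpy as np

def quantize_uniform(x, bits):
    # Symmetric uniform quantization of a vector to the given bit width.
    # (Illustrative; the paper's quantizer may differ.)
    if bits >= 16:
        return x.astype(np.float16).astype(np.float64)
    levels = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(x))
    scale = max_abs / levels if max_abs > 0 else 1.0
    return np.round(x / scale) * scale

def mixed_precision_delta(base, finetuned, groups=((8, 16), (32, 8), (128, 3))):
    """Approximate delta = finetuned - base via SVD, quantizing singular
    vectors at decreasing bit widths as singular values shrink.
    `groups` is a list of (num_singular_vectors, bits) pairs — a
    hypothetical schedule, not the paper's exact configuration."""
    delta = finetuned - base
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    approx = np.zeros_like(delta)
    start = 0
    for count, bits in groups:
        end = min(start + count, len(S))
        for i in range(start, end):
            # Higher-bit representation for larger singular values.
            u_q = quantize_uniform(U[:, i], bits)
            v_q = quantize_uniform(Vt[i, :], bits)
            approx += S[i] * np.outer(u_q, v_q)
        start = end
        if start >= len(S):
            break
    return approx

# Usage: the reconstruction should track the true delta closely,
# since the dominant singular directions keep high precision.
rng = np.random.default_rng(0)
base = rng.standard_normal((64, 64))
delta_true = rng.standard_normal((64, 64)) * 0.01
approx = mixed_precision_delta(base, base + delta_true)
```

Serving-time cost savings come from storing only the quantized singular vectors and singular values per fine-tuned model, rather than a full copy of the weights.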

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval | Pass@1 | 85 | 850 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 62.4 | 797 |
| Code Generation | HumanEval (test) | Pass@1 | 56.71 | 444 |
| Mathematical Reasoning | MATH (test) | Overall Accuracy | 12.56 | 433 |
| Visual Question Answering | GQA | Accuracy | 62.8 | 374 |
| Code Generation | MBPP (test) | Pass@1 | 68.3 | 276 |
| Mathematical Reasoning | AIME 2024 | Accuracy | 30 | 251 |
| Code Generation | MBPP | Pass@1 | 82.7 | 175 |
| Code Generation | MBPP | Accuracy (%) | 86.5 | 146 |
| Mathematical Reasoning | MATH500 | Accuracy (ACC) | 76.5 | 133 |

Showing 10 of 17 rows.
