
DRPruning: Efficient Large Language Model Pruning through Distributionally Robust Optimization

About

Large language models (LLMs) deliver impressive results but face challenges from increasing model sizes and computational costs. Structured pruning reduces model size and speeds up inference, but it often causes uneven degradation across domains, leading to biased performance. To address this, we propose DRPruning, a method that dynamically adjusts the data distribution during training to restore balanced performance across heterogeneous and multi-task data. Experiments in monolingual and multilingual settings show that DRPruning surpasses similarly sized models in both pruning and continued pretraining, as measured by perplexity, downstream tasks, and instruction tuning. Further analysis demonstrates the robustness of DRPruning across various domains and distribution shifts. Furthermore, DRPruning can automatically determine optimal reference losses and data ratios, suggesting potential for broader applications. Code and scripts are available at https://github.com/hexuandeng/DRPruning.
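The core mechanism described above is a distributionally robust reweighting of training domains: domains whose loss stays above a reference loss are up-weighted so that later batches over-sample them. The sketch below is a minimal illustration of that idea, assuming an exponentiated-gradient style update over domain weights; the function name `update_domain_weights`, the step size `eta`, and the example numbers are illustrative assumptions, not the paper's exact formulation (see the repository above for the actual implementation).

```python
import numpy as np

def update_domain_weights(weights, domain_losses, reference_losses, eta=0.1):
    """One DRO-style reweighting step (illustrative, not the paper's exact rule).

    Domains whose current loss exceeds their reference loss receive a larger
    sampling weight, so subsequent training batches over-sample the domains
    that are lagging behind.
    """
    excess = np.asarray(domain_losses) - np.asarray(reference_losses)
    new_weights = np.asarray(weights) * np.exp(eta * excess)  # exponentiated-gradient update
    return new_weights / new_weights.sum()                    # renormalize to a distribution

# Hypothetical example: three domains, the second one lags its reference loss.
weights = np.array([1 / 3, 1 / 3, 1 / 3])
domain_losses = [2.1, 2.9, 1.8]       # current per-domain validation losses (made up)
reference_losses = [2.0, 2.2, 1.9]    # target losses the method tracks (made up)
print(update_domain_weights(weights, domain_losses, reference_losses))
```

In such a setup, the per-domain losses would come from held-out validation data measured during pruning and continued pretraining, and the updated weights would drive the data sampler for the next training interval.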

Hexuan Deng, Wenxiang Jiao, Xuebo Liu, Jing Li, Min Zhang, Zhaopeng Tu • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Language Modeling | SlimPajama (test) | PPL (CommonCrawl): 44 | 23 |
| Language Modeling | SlimPajama | Perplexity (PPL): 7.4 | 12 |
| Downstream Task Evaluation | 15 Downstream Tasks (summary) | -- | 7 |
| Downstream Task Performance | Multilingual Downstream Tasks (test) | EN: 61.67 | 6 |
| Machine Translation | Instruction Tuning Machine Translation Analysis (test) | EN-ZH: 27.4 | 2 |

Other info

Code: https://github.com/hexuandeng/DRPruning
