
Pruning Large Language Models to Intra-module Low-rank Architecture with Transitional Activations

About

Structured pruning fundamentally reduces the computational and memory overheads of large language models (LLMs) and offers a feasible path to on-device LLM deployment. Structurally pruned models remain dense and high-precision, making them highly compatible with further tuning and compression. However, because coarse-grained structured pruning inflicts substantial damage on the highly interconnected model, achieving a high compression ratio for scaled-up LLMs remains a challenge. In this paper, we introduce a task-agnostic structured pruning approach coupled with a compact Transformer architecture design. The proposed approach, named TransAct, reduces the transitional activations inside the multi-head attention (MHA) and multi-layer perceptron (MLP) modules while preserving the inter-module activations, which are sensitive to perturbation. The LLM is thus pruned into an intra-module low-rank architecture, significantly reducing weights, the KV cache, and attention computation. TransAct is implemented on the LLaMA model and evaluated on downstream benchmarks. Results verify the optimality of our approach at high compression ratios with respect to both efficiency and performance. Further, ablation studies reveal the strength of activation-guided iterative pruning and provide an experimental analysis of the redundancy of the MHA and MLP modules.
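The core idea of pruning transitional activations can be illustrated on an MLP module: the module's input/output (inter-module) dimension is kept intact, while its intermediate (transitional) width is shrunk, leaving an intra-module low-rank bottleneck. The sketch below is a minimal, hypothetical illustration only, it scores intermediate channels by mean absolute activation on calibration data and keeps the top-k, which is a simplification of whatever criterion the paper actually uses.

```python
import numpy as np

def prune_mlp_intermediate(w_up, w_down, calib_x, keep_ratio=0.5):
    """Prune the transitional (intermediate) dimension of a Transformer MLP.

    The hidden size d_model of the module's inputs/outputs is preserved,
    while the intermediate width d_ff is reduced, so the module becomes
    low-rank internally. Channel importance here is the mean absolute
    transitional activation on calibration data (an assumed, simplified
    criterion for illustration).
    """
    # Transitional activations on calibration inputs: (n_samples, d_ff).
    # A ReLU MLP is assumed for simplicity.
    acts = np.maximum(calib_x @ w_up, 0.0)
    scores = np.abs(acts).mean(axis=0)              # per-channel importance
    k = max(1, int(round(keep_ratio * w_up.shape[1])))
    keep = np.sort(np.argsort(scores)[-k:])         # top-k channels, original order
    # Slice both projections along the shared intermediate axis.
    return w_up[:, keep], w_down[keep, :]
```

Because both projection matrices are sliced along the same intermediate axis, the pruned module is a drop-in replacement with unchanged input/output shapes, which is what keeps such a pruned model dense and compatible with further tuning.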

Bowen Shen, Zheng Lin, Daren Zha, Wei Liu, Jian Luan, Bin Wang, Weiping Wang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 71.2 | 1460 |
| Commonsense Reasoning | WinoGrande | Accuracy | 65.5 | 776 |
| Question Answering | ARC Challenge | Accuracy | 38.9 | 749 |
| Question Answering | ARC Easy | Normalized Acc | 65.5 | 385 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 76.9 | 329 |
| Boolean Question Answering | BoolQ | Accuracy | 66.3 | 307 |
| Question Answering | OBQA | Accuracy | 38.2 | 276 |
| Question Answering | SciQ | Accuracy | 91.0 | 226 |
| Question Answering | TriviaQA | Accuracy | 33.9 | 210 |
| Logical Reasoning | LogiQA | Accuracy | 27.9 | 98 |

Showing 10 of 17 rows.
