
Training-Free Activation Sparsity in Large Language Models

About

Activation sparsity can enable practical inference speedups in large language models (LLMs) by reducing the compute and memory movement required for matrix multiplications during the forward pass. However, existing methods face limitations that inhibit widespread adoption. Some approaches are tailored to older models with ReLU-induced sparsity, while others require extensive continued pre-training on up to hundreds of billions of tokens. This paper describes TEAL, a simple training-free method that applies magnitude-based activation sparsity to hidden states throughout the entire model. TEAL achieves 40-50% model-wide sparsity with minimal performance degradation across the Llama-2, Llama-3, and Mistral families, at sizes from 7B to 70B. We improve existing sparse kernels and demonstrate wall-clock decoding speed-ups of up to 1.53× and 1.8× at 40% and 50% model-wide sparsity, respectively. TEAL is also compatible with weight quantization, enabling further efficiency gains.
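The core idea can be illustrated with a short sketch. Below, low-magnitude entries of an activation tensor are zeroed to hit a target sparsity level; note that TEAL itself calibrates per-tensor thresholds offline rather than computing them on the fly as this toy function does, so `magnitude_sparsify` is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def magnitude_sparsify(x: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the lowest-magnitude fraction of activations.

    A minimal sketch of magnitude-based activation sparsity; the
    threshold is the k-th smallest absolute value, where k is the
    number of entries to drop.
    """
    k = int(sparsity * x.size)
    if k == 0:
        return x.copy()
    # k-th smallest |x| becomes the pruning threshold
    thresh = np.partition(np.abs(x).ravel(), k - 1)[k - 1]
    # Keep entries strictly above the threshold; zero the rest
    return np.where(np.abs(x) > thresh, x, 0.0)

x = np.random.randn(4096).astype(np.float32)
y = magnitude_sparsify(x, sparsity=0.5)
print(np.mean(y == 0))  # roughly 0.5
```

In a real decoder, this thresholding would be applied to the hidden states feeding each attention and MLP projection, letting sparse kernels skip the corresponding rows of the weight matrices.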

James Liu, Pragaash Ponnusamy, Tianle Cai, Han Guo, Yoon Kim, Ben Athiwaratkun • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Language Modeling | WikiText2 | Perplexity 5.15 | 2839 |
| Medical Question Answering | MedMCQA | Accuracy 52.95 | 346 |
| Long-context Language Understanding | LongBench | -- | 292 |
| General Reasoning | MMLU | MMLU Accuracy 76.63 | 156 |
| Question Answering | CommonsenseQA | Accuracy 74.77 | 148 |
| Long-context Understanding | LongBench | Overall Average Score 30.54 | 115 |
| Code | HumanEval | HumanEval Accuracy 46.95 | 79 |
| Question Answering | TruthfulQA | Accuracy 57.08 | 73 |
| Language Modeling | WikiText (test) | Perplexity 5.52 | 62 |
| Question Answering | MMLU | Accuracy 68.78 | 46 |

Showing 10 of 17 rows.
