
Efficient LLMs with AMP: Attention Heads and MLP Pruning

About

Deep learning drives a new wave in computing systems and enables the automation of increasingly complex problems. In particular, Large Language Models (LLMs) have significantly advanced cognitive tasks, often matching or even surpassing human-level performance. However, their extensive parameter counts result in high computational costs and slow inference, posing challenges for deployment in resource-limited settings. Among the strategies for overcoming these challenges, pruning stands out because it reduces model size while maintaining predictive ability. In this paper, we introduce AMP: Attention Heads and MLP Pruning, a novel structured pruning method that efficiently compresses LLMs by removing less critical structures within Multi-Head Attention (MHA) and Multilayer Perceptron (MLP) layers. By projecting the input data onto the weights, AMP assesses structural importance and overcomes the limitations of existing techniques, which often fall short in flexibility or efficiency. Notably, AMP surpasses the current state of the art on commonsense reasoning tasks by up to 1.49 percentage points, achieving a 30% pruning ratio with minimal impact on zero-shot task performance. Moreover, AMP improves inference speed, making it well-suited for deployment in resource-constrained environments. We confirm the flexibility of AMP on different families of LLMs, including LLaMA and Phi.
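The core idea of scoring a structure by projecting calibration inputs onto its weights can be illustrated with a small sketch. This is an assumption-laden toy version, not the authors' implementation: the function `head_importance`, the use of the L2 norm of the projection as the score, and all shapes are hypothetical choices made here for illustration.

```python
import numpy as np

def head_importance(X, head_weights):
    """Score each attention head by projecting calibration activations X
    onto that head's weight slice and taking the projection's L2 norm.
    (Hypothetical scoring rule, loosely following the projection idea.)"""
    scores = []
    for W in head_weights:       # one (d_head, d_model) slice per head
        proj = X @ W             # project inputs onto the head's weights
        scores.append(np.linalg.norm(proj))
    return np.array(scores)

# Toy calibration data: 32 token activations of dimension 64 (d_head).
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 64))
heads = [rng.normal(size=(64, 128)) for _ in range(8)]  # 8 heads

scores = head_importance(X, heads)
# Prune the lowest-scoring 30% of heads, keep the rest.
n_prune = int(0.3 * len(heads))
keep = np.argsort(scores)[n_prune:]
```

The same scoring loop would apply to MLP structures by swapping in the corresponding weight slices; the 30% ratio mirrors the pruning ratio reported above.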

Leandro Giusti Mugnaini, Bruno Lopes Yamamoto, Lucas Lauton de Alcantara, Victor Zacarias, Edson Bollis, Lucas Pellicer, Anna Helena Reali Costa, Artur Jordao • 2025

Related benchmarks

Task                               | Dataset                                           | Result              | Rank
Question Answering                 | ARC Challenge                                     | Accuracy: 44.2      | 749
Question Answering                 | ARC Easy                                          | Accuracy: 68.18     | 386
Question Answering                 | WinoGrande (WG)                                   | Accuracy: 61.56     | 98
Question Answering                 | PIQA                                              | Accuracy: 76.39     | 83
Multiple-choice Question Answering | HellaSwag                                         | Accuracy: 69.22     | 59
Question Answering                 | WinoGrande, HellaSwag, ARC-e, ARC-c, PIQA Average | Avg Accuracy: 63.48 | 35
