AMXFP4: Taming Activation Outliers with Asymmetric Microscaling Floating-Point for 4-bit LLM Inference

About

As large language models (LLMs) grow in parameter size and context length, computation precision has been reduced from 16-bit to 4-bit to improve inference efficiency. However, this reduction causes accuracy degradation due to activation outliers. Rotation-based INT4 methods address this via matrix calibration, but they introduce multi-hour overheads and leave key computations in full precision. Microscaling (MX) floating-point (FP) formats offer fine-grained representation with a shared scale, enabling fully quantized matrix multiplications through direct casting without calibration. However, existing research shows unsatisfactory empirical results for MXFP4 inference, and the robustness of MX formats remains largely unexplored. In this work, we uncover the fundamental tradeoffs of the MX format: while it effectively suppresses activation outliers, it does so at the cost of increased group-wise asymmetry. To address this, we propose AMXFP4, a 4-bit asymmetric FP format that handles both issues using asymmetric shared scales, without requiring calibration. Our custom MAC engine adds negligible hardware cost while improving accuracy: AMXFP4 outperforms MXFP4 by 3% on VQA and exceeds rotation-based methods by 1.6% on CSQA. It also surpasses recently deployed commercial MXFP4 variants. Code: https://github.com/aiha-lab/MX-QLLM
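To make the tradeoff concrete, the following is a minimal, illustrative sketch of group-wise 4-bit quantization: a symmetric MX-style scheme with one shared scale per group, next to an asymmetric variant that uses separate shared scales for a group's positive and negative values. This is a simplification for intuition only, not the paper's exact AMXFP4 format or MAC datapath; the FP4 grid assumed here is the standard E2M1 magnitude set {0, 0.5, 1, 1.5, 2, 3, 4, 6}.

```python
import numpy as np

# FP4 (E2M1) positive magnitude grid: 0, 0.5, 1, 1.5, 2, 3, 4, 6
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def snap_to_fp4(x):
    """Round each non-negative magnitude to the nearest FP4 grid point."""
    idx = np.abs(x[:, None] - FP4_GRID[None, :]).argmin(axis=1)
    return FP4_GRID[idx]

def quantize_mxfp4(group):
    """Symmetric MX-style quantization: one shared scale per group."""
    scale = np.max(np.abs(group)) / FP4_GRID[-1]
    if scale == 0:
        return np.zeros_like(group)
    return snap_to_fp4(np.abs(group) / scale) * np.sign(group) * scale

def quantize_amxfp4(group):
    """Asymmetric variant (illustrative): separate shared scales for the
    positive and negative values in the group, so one signed outlier does
    not stretch the grid for the opposite sign."""
    pos, neg = group.clip(min=0), (-group).clip(min=0)
    s_pos = pos.max() / FP4_GRID[-1]
    s_neg = neg.max() / FP4_GRID[-1]
    q_pos = snap_to_fp4(pos / s_pos) * s_pos if s_pos > 0 else pos
    q_neg = snap_to_fp4(neg / s_neg) * s_neg if s_neg > 0 else neg
    return q_pos - q_neg

# A skewed activation group: one large negative outlier, small positives.
g = np.array([-8.0, 0.3, 0.5, 0.7, 0.2, 0.4, 0.6, 0.1])
err_sym = np.mean((g - quantize_mxfp4(g)) ** 2)
err_asym = np.mean((g - quantize_amxfp4(g)) ** 2)
print(f"symmetric MSE:  {err_sym:.4f}")
print(f"asymmetric MSE: {err_asym:.4f}")
```

With the outlier at one end of the range, the single symmetric scale flushes most small positives toward a coarse grid, while the asymmetric scales keep resolution on both sides, yielding a visibly lower group reconstruction error.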

Janghwan Lee, Jiwoong Park, Jinseok Kim, Yongjik Kim, Jungju Oh, Jinwook Oh, Jungwook Choi• 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | WikiText2 | Perplexity | 5.85 | 2839
Language Modeling | WikiText-2 (test) | PPL | 4.35 | 1949
Visual Question Answering | ChartQA | Accuracy | 49.48 | 371
Multi-task Language Understanding | MMLU | Accuracy | 79.96 | 321
Multiple-choice Question Answering | MMLU | Accuracy | 52.8 | 185
Visual Question Answering | DocVQA | Accuracy | 66.98 | 162
Massive Multitask Language Understanding | MMLU | Accuracy | 53.11 | 117
Commonsense Question Answering | CSQA | Accuracy | 64.9 | 44
OCR and Text-based Visual Question Answering | OCRBench | Accuracy | 43.9 | 19
Language Understanding | Benchmarks (ARC-C, BoolQ, Lambada, PIQA, Winogrande), zero-shot | ARC-C Accuracy | 51.54 | 16

(Showing 10 of 13 rows)

Other info

Code: https://github.com/aiha-lab/MX-QLLM