
AMXFP4: Taming Activation Outliers with Asymmetric Microscaling Floating-Point for 4-bit LLM Inference

About

As large language models (LLMs) grow in parameter size and context length, computation precision has been reduced from 16-bit to 4-bit to improve inference efficiency. However, this reduction causes accuracy degradation due to activation outliers. Rotation-based INT4 methods address this via matrix calibration, but they introduce multi-hour overheads and leave key computations in full precision. Microscaling (MX) floating-point (FP) formats offer fine-grained representation with a shared scale, enabling fully quantized matrix multiplications through direct casting without calibration. However, existing research shows unsatisfactory empirical results for MXFP4 inference, and the robustness of MX formats remains largely unexplored. In this work, we uncover the fundamental tradeoffs of the MX format: while it effectively suppresses activation outliers, it does so at the cost of increased group-wise asymmetry. To address this, we propose AMXFP4, a 4-bit asymmetric FP format that handles both issues using asymmetric shared scales, without requiring calibration. Our custom MAC engine adds negligible hardware cost while improving accuracy: AMXFP4 outperforms MXFP4 by 3% on VQA and exceeds rotation-based methods by 1.6% on CSQA. It also surpasses recently deployed commercial MXFP4 variants. Code: https://github.com/aiha-lab/MX-QLLM
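The tradeoff described above — a single shared scale per group suppresses outliers but wastes precision when the group's positive and negative values have very different magnitudes — can be illustrated with a toy sketch. This is not the paper's AMXFP4 implementation: it uses a uniform 7-level integer grid instead of true FP4 exponent/mantissa levels, and the function names are hypothetical. It only contrasts one symmetric shared scale per group (MX-style) with separate shared scales for the positive and negative halves (the asymmetric idea):

```python
# Conceptual sketch only, NOT the paper's AMXFP4 format: a uniform
# 7-level grid stands in for the real FP4 element levels.

def quantize_group_symmetric(group, levels=7):
    """MX-style: one shared scale per group, set by the max |x|."""
    scale = max(abs(x) for x in group) / levels or 1.0
    return [round(x / scale) * scale for x in group]

def quantize_group_asymmetric(group, levels=7):
    """Asymmetric idea: separate shared scales for the positive and
    negative values, so a one-sided outlier does not crush the
    resolution of the other side of the group."""
    pos = max((x for x in group if x > 0), default=0.0)
    neg = min((x for x in group if x < 0), default=0.0)
    s_pos = pos / levels or 1.0
    s_neg = -neg / levels or 1.0
    out = []
    for x in group:
        s = s_pos if x >= 0 else s_neg
        out.append(round(x / s) * s)
    return out

# A group with a large positive outlier and small negative values:
# the symmetric scale flushes the small values to zero, while the
# asymmetric scales keep resolution on the negative side.
group = [-0.12, -0.2, 0.05, 6.0]
err_sym = sum((a - b) ** 2
              for a, b in zip(group, quantize_group_symmetric(group)))
err_asym = sum((a - b) ** 2
               for a, b in zip(group, quantize_group_asymmetric(group)))
```

On this example the asymmetric variant yields a strictly lower squared reconstruction error, mirroring the abstract's claim that asymmetric shared scales recover the accuracy lost to group-wise asymmetry.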

Janghwan Lee, Jiwoong Park, Jinseok Kim, Yongjik Kim, Jungju Oh, Jinwook Oh, Jungwook Choi • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText-2 (test) | PPL | 4.35 | 1541 |
| Visual Question Answering | ChartQA | Accuracy | 49.48 | 239 |
| Multiple-choice Question Answering | MMLU | Accuracy | 52.8 | 148 |
| Visual Question Answering | DocVQA | Accuracy | 66.98 | 103 |
| Commonsense Question Answering | CSQA | Accuracy | 64.9 | 44 |
| OCR and Text-based Visual Question Answering | OCRBench | Accuracy | 43.9 | 19 |
| General LLM Evaluation | MT-Bench | Writing | 8.2 | 4 |
| Multi-modal Visual Question Answering | VQA-T | Accuracy | 59.13 | 4 |

Other info

Code: https://github.com/aiha-lab/MX-QLLM