
Microscaling Data Formats for Deep Learning

About

Narrow bit-width data formats are key to reducing the computational and storage costs of modern deep learning applications. This paper evaluates Microscaling (MX) data formats, which combine a per-block scaling factor with narrow floating-point and integer types for individual elements. MX formats balance the competing needs of hardware efficiency, model accuracy, and user friction. Empirical results on over two dozen benchmarks demonstrate the practicality of MX data formats as a drop-in replacement for baseline FP32 for AI inference and training with low user friction. We also show the first instance of training generative language models with sub-8-bit weights, activations, and gradients with minimal accuracy loss and no modifications to the training recipe.
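The core idea of the abstract, a shared per-block scaling factor plus narrow per-element types, can be illustrated with a short fake-quantization sketch. The OCP Microscaling spec uses blocks of 32 elements sharing one power-of-two (E8M0) scale; everything else below, including the function name `mx_quantize_dequantize`, the MXINT-style integer rounding, and the use of NumPy, is an illustrative simplification rather than the paper's reference implementation.

```python
import numpy as np

BLOCK = 32  # MX formats share one scale per small block (32 in the OCP MX spec)

def mx_quantize_dequantize(x, elem_bits=8):
    """Fake-quantize a 1-D float array with MX-style per-block scaling.

    A minimal sketch: the real format stores an 8-bit power-of-two (E8M0)
    shared scale plus narrow elements; here we just round-trip the values.
    """
    qmax = 2 ** (elem_bits - 1) - 1  # e.g. 127 for 8-bit integer elements
    out = np.empty_like(x, dtype=np.float32)
    for start in range(0, len(x), BLOCK):
        blk = x[start:start + BLOCK].astype(np.float32)
        amax = np.max(np.abs(blk))
        # Shared power-of-two scale, chosen so the largest element fits
        # in the narrow element range.
        exp = np.floor(np.log2(amax)) - np.floor(np.log2(qmax)) if amax > 0 else 0.0
        scale = np.float32(2.0 ** exp)
        q = np.clip(np.round(blk / scale), -qmax, qmax)  # narrow elements
        out[start:start + BLOCK] = q * scale             # dequantize to FP32
    return out

# Round-trip example: error should be small relative to each block's max value.
rng = np.random.default_rng(0)
x = rng.standard_normal(64).astype(np.float32)
x_hat = mx_quantize_dequantize(x)
print("max abs error:", np.max(np.abs(x - x_hat)))
```

Because the scale is a power of two, scaling reduces to an exponent shift in hardware, which is a large part of why these formats are cheap to implement.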

Bita Darvish Rouhani, Ritchie Zhao, Ankit More, Mathew Hall, Alireza Khodamoradi, Summer Deng, Dhruv Choudhary, Marius Cornea, Eric Dellinger, Kristof Denolf, Stosic Dusan, Venmugil Elango, Maximilian Golub, Alexander Heinecke, Phil James-Roxby, Dharmesh Jani, Gaurav Kolhe, Martin Langhammer, Ada Li, Levi Melnick, Maral Mesmakhosroshahi, Andres Rodriguez, Michael Schulte, Rasoul Shafipour, Lei Shao, Michael Siu, Pradeep Dubey, Paulius Micikevicius, Maxim Naumov, Colin Verrilli, Ralph Wittig, Doug Burger, Eric Chung • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Language Understanding | MMLU | Accuracy | 66.16 | 756 |
| Language Modeling | WikiText | PPL | 9.75 | 479 |
| Language Modeling | C4 | Perplexity | 15.49 | 321 |
| Long-context Language Modeling | LongBench | Single-Document QA | 42.77 | 44 |
| Language Modeling | WikiText-103 | PPL | 3.69 | 42 |
| Zero-shot Language Modeling | LM Evaluation Harness 0-shot | WG | 76.32 | 30 |
| Chat Fine-tuning | LLaMA Chat 1B | vNMSE | 0.0032 | 6 |
| Chat Fine-tuning | Gemma 1B Chat | vNMSE | 0.0031 | 6 |
| Masked Language Modeling | BERT large | vNMSE | 0.0059 | 6 |
| Massive Multitask Language Understanding | MMLU LLaMA 1B | vNMSE | 0.003 | 6 |

(Showing 10 of 12 rows.)
