
DynamiQ: Accelerating Gradient Synchronization using Compressed Multi-hop All-reduce

About

Multi-hop all-reduce is the de facto backbone of large model training. As the training scale increases, the network often becomes a bottleneck, motivating a reduction in the volume of transmitted data. Accordingly, recent systems have demonstrated significant acceleration of the training process using gradient quantization. However, these systems are not optimized for multi-hop aggregation, where entries are partially summed multiple times along their aggregation topology. This paper presents DynamiQ, a quantization framework that bridges the gap between quantization best practices and multi-hop aggregation. DynamiQ introduces novel techniques to better represent partial sums, co-designed with a decompress-accumulate-recompress fused kernel to facilitate fast execution. We extended PyTorch DDP to support DynamiQ over NCCL P2P, and across different LLMs, tasks, and scales, we demonstrate consistent improvement of up to 34.2% over the best among state-of-the-art methods such as Omni-Reduce, THC, and emerging standards such as MXFP4, MXFP6, and MXFP8. Further, DynamiQ is the only evaluated method that consistently reaches near-baseline accuracy (e.g., 99.9% of the BF16 baseline) and does so while significantly accelerating the training.
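To make the decompress-accumulate-recompress pattern concrete, below is a minimal NumPy sketch of quantized chain aggregation. This is not the authors' implementation or kernel; the quantizer (uniform levels with stochastic rounding) and all function names are illustrative assumptions, and a real system would operate on the compressed wire format inside a fused GPU kernel rather than round-tripping through floats on the host.

```python
import numpy as np

def quantize(x, num_levels=256):
    # Illustrative uniform quantizer over [min, max] with stochastic
    # rounding, which keeps the quantizer unbiased in expectation.
    lo, hi = x.min(), x.max()
    if hi == lo:
        return np.zeros_like(x, dtype=np.int64), lo, 0.0
    scale = (hi - lo) / (num_levels - 1)
    t = (x - lo) / scale
    floor = np.floor(t)
    q = floor + (np.random.rand(*x.shape) < (t - floor))
    return q.astype(np.int64), lo, scale

def dequantize(q, lo, scale):
    return lo + q * scale

def multihop_allreduce(grads, num_levels=256):
    # Chain aggregation: each hop decompresses the incoming partial
    # sum, accumulates its local gradient, and recompresses before
    # forwarding. The partial sum is quantized anew at every hop,
    # which is exactly where multi-hop error accumulates.
    acc = grads[0].copy()
    for g in grads[1:]:
        q, lo, scale = quantize(acc, num_levels)  # compress for the wire
        acc = dequantize(q, lo, scale) + g        # decompress-accumulate
    return acc  # final hop would compress once more for the broadcast
```

Note that the partial sum's dynamic range grows with each hop, so a fixed quantization grid loses relative precision as aggregation proceeds; handling this growing-range problem for partial sums is the gap the paper targets.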

Wenchen Han, Shay Vargaftik, Michael Mitzenmacher, Ran Ben Basat • 2026

Related benchmarks

Task | Dataset | Result | Rank
Chat Fine-tuning | LLaMA Chat 1B | vNMSE: 0.0015 | 6
Chat Fine-tuning | Gemma 1B Chat | vNMSE: 0.0012 | 6
Masked Language Modeling | BERT large | vNMSE: 0.0022 | 6
Massive Multitask Language Understanding | MMLU LLaMA 1B | vNMSE: 9.60e-4 | 6
Multi-task Language Understanding | MMLU | Accuracy: 73.04 | 5
