
HOT: Hadamard-based Optimized Training

About

It has become increasingly important to optimize backpropagation to reduce memory usage and computational overhead. Achieving this is challenging because multiple objectives must be balanced jointly without degrading training quality. In this paper, we focus on matrix multiplication, which accounts for the largest share of training cost, and analyze its backpropagation in detail to identify the lightweight techniques that offer the greatest benefit. Based on this analysis, we introduce a novel method, Hadamard-based Optimized Training (HOT). In this approach, we apply Hadamard-based optimizations, such as Hadamard quantization and Hadamard low-rank approximation, selectively, matching each optimization to the backward path for which it is best suited. Additionally, we introduce two enhancements: activation buffer compression and layer-wise quantizer selection. Our extensive analysis shows that HOT achieves up to 75% memory savings and a 2.6× acceleration on real GPUs, with negligible accuracy loss compared to FP32 precision.
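To give a flavor of the Hadamard quantization idea mentioned in the abstract: rotating a tensor with an orthonormal Hadamard transform spreads outlier values across all channels, flattening the distribution so that a low-bit quantizer wastes less of its range on rare extremes. The sketch below is illustrative only, assuming a simple symmetric per-tensor INT8 quantizer; it is not the paper's implementation, and all function names here are hypothetical.

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Orthonormal Hadamard matrix via Sylvester construction (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)  # scaled so that H @ H.T = I

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization (illustrative baseline)."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127)
    return q, scale

def hadamard_quantize(x: np.ndarray):
    """Rotate with the Hadamard transform, then quantize the flattened values."""
    H = hadamard(x.shape[0])
    q, scale = quantize_int8(H @ x)
    return q, scale, H

def dequantize(q, scale, H):
    """Dequantize, then undo the (orthonormal) rotation."""
    return H.T @ (q * scale)

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))
x[0, 0] = 50.0  # an outlier that would dominate a naive quantizer's scale

q, s, H = hadamard_quantize(x)
x_hat = dequantize(q, s, H)
err_hot = np.abs(x - x_hat).mean()

q_naive, s_naive = quantize_int8(x)
err_naive = np.abs(x - q_naive * s_naive).mean()
# The rotation spreads the outlier over 64 entries, shrinking the
# quantization scale and thus the reconstruction error.
```

Because the transform is orthonormal, the rotation is exactly invertible and preserves norms, so the only loss comes from the (now better-conditioned) quantization step.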

Seonggon Kim, Juncheol Shin, Seung-taek Woo, Eunhyeok Park • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR10 (test) | Accuracy | 95.01 | 585 |
| Semantic Segmentation | Cityscapes | mIoU | 71.72 | 578 |
| Image Classification | CIFAR100 (test) | Accuracy | 76.95 | 206 |
| Image Classification | ImageNet-100 (val) | Top-1 Accuracy | 86.7 | 95 |
| Classification | CIFAR100 | Accuracy | 92.99 | 66 |
| Image Classification | ImageNet 1k (train) | Top-1 Accuracy | 69.4 | 58 |
| Classification | CIFAR10 | Top-1 Accuracy | 98.6 | 38 |
| Language Modeling | Alpaca | Perplexity | 3.29 | 31 |
| Object Detection | VOC 2007 | mAP | 85.1 | 23 |
| Semantic Segmentation | VOC 2012 | mIoU | 79.1 | 18 |
Showing 10 of 12 rows
