
Per-Axis Weight Deltas for Frequent Model Updates

About

Serving many task-specialized LLM variants is often limited by the large size of fine-tuned checkpoints and the resulting cold-start latency. Since fine-tuned weights differ from their base model by relatively small structured residuals, a natural approach is to represent them as compressed deltas. We propose a simple 1-bit delta scheme that stores only the sign of the weight difference together with lightweight per-axis (row/column) FP16 scaling factors, learned from a small calibration set. This design preserves the compactness of 1-bit deltas while more accurately capturing variation across weight dimensions, leading to improved reconstruction quality over scalar alternatives. From a systems perspective, a streamlined loader that transfers packed deltas in a single operation per module reduces cold-start latency and storage overhead, with artifacts several times smaller than a full FP16 checkpoint. The method is drop-in, requires minimal calibration data, and maintains inference efficiency by avoiding dense reconstruction. Our experimental setup and source code are available at https://github.com/kuiumdjiev/Per-Axis-Weight-Deltas-for-Frequent-Model-Updates.
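The scheme in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it stores only the sign bits of the weight delta (packed to 1 bit per entry) plus a per-row FP16 scale, and reconstructs an approximate fine-tuned weight on load. The per-row scale here is a simple closed-form choice (mean absolute delta); the paper instead learns per-axis row/column scales from a small calibration set. All function names and the half-normal test weights are illustrative.

```python
import numpy as np

def compress_delta(w_tuned, w_base):
    """1-bit delta: sign of the weight difference plus a per-row FP16 scale.

    Assumption: the scale is the per-row mean absolute delta, a closed-form
    stand-in for the calibration-learned per-axis scales in the paper.
    """
    delta = w_tuned - w_base
    # Pack signs to 1 bit per entry (exact-zero deltas map to the minus sign).
    bits = np.packbits(delta > 0)
    scale = np.abs(delta).mean(axis=1, keepdims=True).astype(np.float16)
    return bits, scale, delta.shape

def apply_delta(w_base, bits, scale, shape):
    """Reconstruct an approximate fine-tuned weight from the packed delta."""
    signs = np.unpackbits(bits, count=shape[0] * shape[1]).reshape(shape)
    signs = signs.astype(np.float32) * 2.0 - 1.0  # {0, 1} -> {-1, +1}
    return w_base + scale.astype(np.float32) * signs

# Toy example: a small perturbation of a random base weight matrix.
rng = np.random.default_rng(0)
base = rng.standard_normal((64, 128)).astype(np.float32)
tuned = base + 0.01 * rng.standard_normal((64, 128)).astype(np.float32)

bits, scale, shape = compress_delta(tuned, base)
approx = apply_delta(base, bits, scale, shape)
```

The packed artifact is the sign bitmap plus one FP16 scalar per row, so its size is roughly 1/16 of the FP32 delta here, and reconstruction is a single fused multiply-add per module rather than a dense dequantization pass.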

Stefan Kuyumdzhiev, Radostin Cholakov • 2025

Related benchmarks

Task | Dataset | Result | Rank
Commonsense Reasoning | HellaSwag | Accuracy: 62.07 | 1460
Physical Commonsense Reasoning | PIQA | Accuracy: 80.85 | 329
Question Answering | ARC-E | Accuracy: 84.34 | 242
Commonsense Reasoning | WinoGrande | Accuracy: 76.24 | 231
Question Answering | ARC-C | Accuracy: 58.7 | 68
Model Compression Analysis | Model Checkpoints (Llama-3.1-8B, Qwen3-14B, Phi-4) | Model Size: 2,980 MB | 6
