Per-Axis Weight Deltas for Frequent Model Updates
About
Serving many task-specialized LLM variants is often limited by the large size of fine-tuned checkpoints and the resulting cold-start latency. Since fine-tuned weights differ from their base model by relatively small structured residuals, a natural approach is to represent them as compressed deltas. We propose a simple 1-bit delta scheme that stores only the sign of the weight difference together with lightweight per-axis (row/column) FP16 scaling factors, learned from a small calibration set. This design preserves the compactness of 1-bit deltas while more accurately capturing variation across weight dimensions, leading to improved reconstruction quality over scalar alternatives. From a systems perspective, a streamlined loader that transfers packed deltas in a single operation per module reduces cold-start latency and storage overhead, with artifacts several times smaller than a full FP16 checkpoint. The method is drop-in, requires minimal calibration data, and maintains inference efficiency by avoiding dense reconstruction. Our experimental setup and source code are available at https://github.com/kuiumdjiev/Per-Axis-Weight-Deltas-for-Frequent-Model-Updates.
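The core idea above can be sketched in a few lines: store only `sign(W_ft − W_base)` (1 bit per weight) plus a per-row FP16 scale. The snippet below is a minimal NumPy illustration, not the paper's implementation; in particular, it uses the closed-form least-squares scale (the mean absolute residual per row) as a stand-in for the calibration-learned scales described in the abstract, and the function names are hypothetical.

```python
import numpy as np

def compress_delta(w_base: np.ndarray, w_ft: np.ndarray):
    """1-bit delta: sign of the weight difference plus a per-row FP16 scale.

    For sign quantization, the scale that minimizes per-row squared
    reconstruction error in closed form is the mean absolute value of that
    row's residual (a stand-in here for the paper's calibration-learned scales).
    """
    delta = w_ft - w_base
    signs = np.sign(delta).astype(np.int8)   # packable down to 1 bit/weight
    scales = np.abs(delta).mean(axis=1, keepdims=True).astype(np.float16)
    return signs, scales

def reconstruct(w_base: np.ndarray, signs: np.ndarray, scales: np.ndarray):
    """Approximate the fine-tuned weights from the base weights and the delta."""
    return w_base + scales.astype(np.float32) * signs

# Toy example: a small perturbation standing in for a fine-tuning residual.
rng = np.random.default_rng(0)
w_base = rng.standard_normal((4, 8)).astype(np.float32)
w_ft = w_base + 0.01 * rng.standard_normal((4, 8)).astype(np.float32)

signs, scales = compress_delta(w_base, w_ft)
w_hat = reconstruct(w_base, signs, scales)
```

With the per-row mean-absolute scale, the reconstruction error is provably no larger than the residual itself, and in practice substantially smaller; a per-column variant follows by taking the mean over `axis=0` instead.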
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 62.07 | 1460 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 80.85 | 329 |
| Question Answering | ARC-E | Accuracy | 84.34 | 242 |
| Commonsense Reasoning | WinoGrande | Accuracy | 76.24 | 231 |
| Question Answering | ARC-C | Accuracy | 0.587 | 68 |
| Model Compression Analysis | Model Checkpoints (Llama-3.1-8B, Qwen3-14B, Phi-4) | Model Size (MB) | 2.98e+3 | 6 |