
LARV: Data-Free Layer-wise Adaptive Rescaling Veneer for Model Merging

About

Model merging aims to combine multiple fine-tuned models into a single multi-task model without access to training data. Existing task-vector merging methods such as TIES, TSV-M, and Iso-C/CTS differ in their aggregation rules but treat all layers nearly uniformly. This assumption overlooks the strong layer-wise heterogeneity of large vision transformers, where shallow layers are sensitive to interference while deeper layers encode stable task-specific features. We introduce LARV, a training-free, data-free, merger-agnostic Layer-wise Adaptive Rescaling Veneer that plugs into any task-vector merger and assigns a per-layer scale to each task vector before aggregation. LARV adaptively suppresses shallow-layer interference and amplifies deeper-layer alignment using a simple deterministic schedule, requiring no retraining and no modification to existing mergers. To our knowledge, this is the first work to perform layer-aware scaling for task-vector merging. LARV computes simple data-free layer proxies and turns them into scales through a lightweight rule; we study several instantiations within one framework (e.g., tiered two- or three-level scaling with fixed values, or continuous mappings) and find that tiered choices offer the best robustness, while continuous mappings serve as an ablation. LARV is orthogonal to the base merger and adds negligible cost. On FusionBench with Vision Transformers, LARV consistently improves all task-vector baselines across 8-, 14-, and 20-task settings; for example, Iso-C + LARV reaches 85.9% on ViT-B/32, 89.2% on ViT-B/16, and 92.6% on ViT-L/14. Layer-wise analysis and corruption tests further indicate that LARV suppresses shallow-layer interference while modestly amplifying deeper, task-stable features, turning model merging into a robust, layer-aware procedure rather than a uniform one.
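To make the idea concrete, here is a minimal toy sketch of layer-wise rescaling applied before simple task-vector averaging. The function names, the tier boundaries, and the scale values (0.6 / 1.0 / 1.2) are illustrative assumptions, not the paper's actual schedule or implementation; parameters are modeled as plain Python lists rather than tensors.

```python
# Hypothetical sketch of LARV-style layer-wise adaptive rescaling.
# Tier boundaries and scale values below are illustrative assumptions.

def tiered_scale(layer_idx, n_layers, shallow=0.6, mid=1.0, deep=1.2):
    """Three-tier schedule: damp shallow layers, boost deep ones."""
    frac = layer_idx / max(n_layers - 1, 1)
    if frac < 1 / 3:
        return shallow
    elif frac < 2 / 3:
        return mid
    return deep

def merge_with_larv(base, finetuned_models, n_layers):
    """Average task vectors with a per-layer scale before adding to the base.

    base / finetuned_models[i]: dicts mapping parameter name -> list of floats.
    Assumes names like 'blocks.<i>.<...>' so the layer index is parseable.
    """
    merged = {k: list(v) for k, v in base.items()}
    for name, base_w in base.items():
        layer_idx = int(name.split(".")[1]) if name.startswith("blocks.") else 0
        s = tiered_scale(layer_idx, n_layers)
        for ft in finetuned_models:
            # Task vector: fine-tuned weights minus pre-trained weights.
            tv = [w - b for w, b in zip(ft[name], base_w)]
            merged[name] = [m + s * t / len(finetuned_models)
                            for m, t in zip(merged[name], tv)]
    return merged
```

Because the rescaling happens on each task vector before aggregation, the same wrapper could precede any base merger (here plain averaging stands in for TIES, TSV-M, or Iso-C).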

Xinyu Wang, Ke Deng, Fei Dou, Jinbo Bi, Jin Lu • 2026

Related benchmarks

Task                  Dataset        Metric    Result  Rank
Image Classification  EuroSAT        Accuracy  97.8    497
Image Classification  Stanford Cars  Accuracy  78.4    477
Image Classification  SUN397         Accuracy  81.4    425
Image Classification  DTD            Accuracy  83.6    419
Image Classification  SVHN           Accuracy  95.2    359
Classification        Cars           Accuracy  93.0    314
Image Classification  GTSRB          Accuracy  97.8    291
Image Classification  RESISC45       Accuracy  96.4    263
Image Classification  MNIST          Accuracy  86.2    263
Image Classification  SUN397         Accuracy  74.4    246

(Showing 10 of 15 rows)
