The Expressive Power of Tuning Only the Normalization Layers

About

Feature normalization transforms such as Batch Normalization and Layer Normalization have become indispensable ingredients of state-of-the-art deep neural networks. Recent studies on fine-tuning large pretrained models indicate that tuning only the parameters of these affine transforms can achieve high accuracy on downstream tasks. These findings raise the question of the expressive power of tuning only the normalization layers of a frozen network. In this work, we take a first step toward answering this question and show that for random ReLU networks, fine-tuning only the normalization layers can reconstruct any target network that is $O(\sqrt{\text{width}})$ times smaller. We show that this holds even for randomly sparsified networks, given sufficient overparameterization, in agreement with prior empirical work.
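To make the setting concrete, below is a minimal PyTorch sketch (not from the paper) of the fine-tuning regime the abstract describes: every weight of a pretrained network is frozen, and only the affine scale/shift (gamma/beta) parameters of its normalization layers remain trainable. The choice of a torchvision ResNet-18 backbone and the optimizer settings are illustrative assumptions, not the authors' experimental setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone (illustrative; in practice you would load pretrained
# weights, e.g. models.resnet18(weights="IMAGENET1K_V1")).
model = models.resnet18(weights=None)

# Freeze every parameter of the network.
for p in model.parameters():
    p.requires_grad = False

# Unfreeze only the affine parameters (gamma/beta) of normalization layers.
norm_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d,
              nn.LayerNorm, nn.GroupNorm)
for m in model.modules():
    if isinstance(m, norm_types):
        for p in m.parameters():
            p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {sum(p.numel() for p in trainable):,} / {total:,}")

# Optimize only the normalization parameters on a downstream task.
optimizer = torch.optim.SGD(trainable, lr=1e-2, momentum=0.9)
```

For a ResNet-18 this leaves roughly 10k of about 11.7M parameters trainable (under 0.1%), which illustrates how small the tuned parameter set is relative to the frozen network.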

Angeliki Giannou, Shashank Rajput, Dimitris Papailiopoulos • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic Segmentation | ADE20K (val) | mIoU | 47.89 | 2731 |
| Oriented Object Detection | DOTA v1.0 (test) | -- | -- | 378 |
| Image Classification | Flowers102 (test) | Accuracy | 99.5284 | 68 |
| Oriented Object Detection | STAR (test) | AP | 33.13 | 60 |
| Rotated Object Detection | DOTA 1.0 (test) | mAP | 75.82 | 46 |
| Object Detection | Pascal VOC (test) | mAP | 85.5 | 18 |
| Instance Segmentation | COCO | AP Mask | 43.5 | 15 |
| Image Classification | Average (Flowers102, OxfordPets, VOC2007) (test) | Top-1 Accuracy | 93.4266 | 10 |
| Instance Segmentation | COCO | AP Mask | 43.5 | 10 |
| Object Detection | COCO | AP Box | 50.1 | 10 |

Showing 10 of 11 rows.
