
Isomorphic Pruning for Vision Models

About

Structured pruning reduces the computational overhead of deep neural networks by removing redundant sub-structures. However, assessing the relative importance of different sub-structures remains a significant challenge, particularly in advanced vision models featuring novel mechanisms and architectures such as self-attention, depth-wise convolutions, or residual connections. These heterogeneous sub-structures usually exhibit divergent parameter scales, weight distributions, and computational topologies, introducing considerable difficulty to importance comparison. To overcome this, we present Isomorphic Pruning, a simple approach that demonstrates effectiveness across a range of network architectures such as Vision Transformers and CNNs, and delivers competitive performance across different model sizes. Isomorphic Pruning originates from the observation that, when evaluated under a pre-defined importance criterion, heterogeneous sub-structures demonstrate significant divergence in their importance distributions, whereas isomorphic structures present similar importance patterns. This inspires us to perform isolated ranking and comparison on different types of sub-structures for more reliable pruning. Our empirical results on ImageNet-1K demonstrate that Isomorphic Pruning surpasses several pruning baselines designed specifically for Transformers or CNNs. For instance, we improve the accuracy of DeiT-Tiny from 74.52% to 77.50% by pruning an off-the-shelf DeiT-Base model. For ConvNext-Tiny, we enhance performance from 82.06% to 82.18% while reducing the number of parameters and memory usage. Code is available at \url{https://github.com/VainF/Isomorphic-Pruning}.
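The key idea of isolated ranking can be illustrated with a minimal sketch (an assumption-based toy example, not the authors' implementation): importance scores are ranked within each isomorphic group, rather than pooled into one global ranking where divergent scales would dominate the comparison.

```python
# Minimal sketch of per-group (isomorphic) ranking for pruning.
# Group names, structure ids, and scores below are hypothetical.

def isomorphic_prune(scores_by_group, prune_ratio):
    """Select sub-structures to remove, ranking each group in isolation.

    scores_by_group: dict mapping a group name (e.g. 'attention_head',
        'mlp_channel') to a list of (structure_id, importance) pairs.
    prune_ratio: fraction of structures to remove from every group.
    """
    to_prune = {}
    for group, scored in scores_by_group.items():
        # Sort by importance ascending; least important come first.
        ranked = sorted(scored, key=lambda x: x[1])
        k = int(len(ranked) * prune_ratio)
        to_prune[group] = [sid for sid, _ in ranked[:k]]
    return to_prune

# Hypothetical scores: note the divergent scales between the two groups,
# which would make a single global ranking prune all attention heads first.
scores = {
    "attention_head": [("h0", 0.9), ("h1", 3.1), ("h2", 1.4), ("h3", 2.2)],
    "mlp_channel":    [("c0", 120.0), ("c1", 45.0), ("c2", 300.0), ("c3", 80.0)],
}
print(isomorphic_prune(scores, prune_ratio=0.5))
# -> {'attention_head': ['h0', 'h2'], 'mlp_channel': ['c1', 'c3']}
```

With a global ranking over these scores, every attention head would be removed before any MLP channel; the per-group ranking instead removes the lowest-scoring half of each group independently.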

Gongfan Fang, Xinyin Ma, Michael Bi Mi, Xinchao Wang • 2024

Related benchmarks

Task                  | Dataset           | Result                | Rank
Semantic Segmentation | ADE20K            | mIoU 35.89            | 936
Image Classification  | ImageNet-1k (val) | --                    | 512
Image Classification  | Stanford Cars     | --                    | 477
Image Classification  | CIFAR100          | Accuracy 74.1         | 331
Image Classification  | iNaturalist 2019  | Top-1 Acc 63.3        | 98
Image Classification  | CUB-200           | Accuracy 51.71        | 92
Image Classification  | Oxford Flowers    | Top-1 Accuracy 75.77  | 78
Image Classification  | ImageNet (val)    | Top-1 Accuracy 82.41  | 76
Image Classification  | ImageNet-1K       | Top-1 Acc 75.85       | 75
Image Generation      | ImageNet-1K       | FID 18.68             | 42

(10 of 15 rows shown)
