
ADMP: An Adversarial Double Masks Based Pruning Framework For Unsupervised Cross-Domain Compression

About

Despite recent progress in network pruning, directly applying it to Internet of Things (IoT) applications still faces two challenges: the distribution divergence between end and cloud data, and the lack of data labels on end devices. One straightforward solution is to combine unsupervised domain adaptation (UDA) with pruning; for example, the model is first pruned on the cloud and then transferred from cloud to end via UDA. However, such a naive combination suffers severe performance degradation. This work therefore proposes an Adversarial Double Masks based Pruning (ADMP) framework for such cross-domain compression. In ADMP, we construct a knowledge-distillation framework that not only produces pseudo labels but also measures domain divergence as the output difference between the full-size teacher and the pruned student. Unlike existing mask-based pruning works, ADMP adopts two adversarial masks, a soft mask and a hard mask, so the model can be pruned effectively while still extracting strong domain-invariant features and robust classification boundaries. During training, the Alternating Direction Method of Multipliers (ADMM) is used to handle the binary constraint of the {0,1} masks. On the Office-31 and ImageCLEF-DA datasets, the proposed ADMP prunes 60% of channels with only 0.2% and 0.3% average accuracy loss, respectively. Compared with the state of the art, we achieve about 1.63x parameter reduction and 4.1% and 5.1% accuracy improvement.
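The ADMM step mentioned above handles the constraint that a channel mask must end up binary even though it is trained as a continuous variable. A minimal sketch of that idea, using NumPy and a top-k projection onto {0,1} (the variable names, sparsity target, and penalty weight are illustrative assumptions, not the paper's released code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Continuous "soft" channel mask over 8 channels (hypothetical size).
m = rng.uniform(0.0, 1.0, size=8)
u = np.zeros_like(m)  # scaled ADMM dual variable

def project_binary(x, sparsity=0.5):
    """Project onto {0,1}: keep the top-(1-sparsity) fraction of entries."""
    k = int(round((1.0 - sparsity) * x.size))
    z = np.zeros_like(x)
    if k > 0:
        z[np.argsort(x)[-k:]] = 1.0
    return z

rho = 0.5  # penalty weight (illustrative)
for _ in range(20):
    # ADMM splitting: z is the hard {0,1} mask, u the running dual.
    z = project_binary(m + u, sparsity=0.5)
    # In real training, m would also follow the task-loss gradient; here we
    # only show the penalty term pulling m toward the binary variable z.
    m = m - rho * (m - z + u)  # gradient step on (rho/2)||m - z + u||^2
    u = u + (m - z)            # dual update

hard_mask = project_binary(m + u, sparsity=0.5)
print(hard_mask)  # exactly 4 of 8 channels kept at the 60%-style sparsity target
```

In the full method, the soft mask scales channel outputs during training while the hard mask decides which channels survive pruning; the projection above is one common way to enforce the {0,1} constraint without backpropagating through a discrete choice.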

Xiaoyu Feng, Zhuqing Yuan, Guijin Wang, Yongpan Liu • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Unsupervised Domain Adaptation | ImageCLEF-DA | Average Accuracy: 86.3 | 104 |
| Unsupervised Domain Adaptation Classification | Office-31 (test) | Accuracy (A->W): 83.3 | 51 |
