
PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search

About

The wide application of pre-trained models is driving the trend of once-for-all training in one-shot neural architecture search (NAS). However, training over a huge sample space degrades the performance of individual subnets and makes the subsequent search for an optimal model computationally expensive. In this paper, we present PreNAS, a search-free NAS approach that accentuates target models in one-shot training. Specifically, the sample space is dramatically reduced in advance by a zero-cost selector, and weight-sharing one-shot training is then performed on the preferred architectures to alleviate update conflicts. Extensive experiments demonstrate that PreNAS consistently outperforms state-of-the-art one-shot NAS competitors for both Vision Transformer and convolutional architectures and, importantly, enables instant specialization with zero search cost. Our code is available at https://github.com/tinyvision/PreNAS.
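The two-stage pipeline the abstract describes (score candidates with a zero-cost proxy first, then run weight-sharing one-shot training only on the surviving "preferred" architectures) can be illustrated with a minimal sketch. The code below is not the authors' implementation: the toy MLP search space, the SNIP-style saliency standing in for the zero-cost selector, and all names (SliceLinear, SuperNet, WIDTH_CHOICES, etc.) are assumptions for illustration; PreNAS itself targets Vision Transformer and convolutional spaces with its own selector.

```python
# Minimal sketch of the PreNAS idea; NOT the paper's implementation.
# Search space, proxy, and all names are illustrative assumptions.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

WIDTH_CHOICES = [64, 128, 256]        # per-layer width options (toy space)
NUM_LAYERS, IN_DIM, OUT_DIM = 3, 32, 10
MAX_W = max(WIDTH_CHOICES)

class SliceLinear(nn.Module):
    """Weight-sharing layer: every subnet reads the leading slice of one
    max-width weight matrix (the usual one-shot supernet trick)."""
    def __init__(self, max_in, max_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out, max_in) * 0.02)
        self.bias = nn.Parameter(torch.zeros(max_out))

    def forward(self, x, out_dim):
        return F.linear(x, self.weight[:out_dim, :x.shape[-1]],
                        self.bias[:out_dim])

class SuperNet(nn.Module):
    def __init__(self):
        super().__init__()
        dims = [IN_DIM] + [MAX_W] * NUM_LAYERS
        self.hidden = nn.ModuleList(SliceLinear(i, o)
                                    for i, o in zip(dims, dims[1:]))
        self.head = SliceLinear(MAX_W, OUT_DIM)

    def forward(self, x, arch):
        for layer, width in zip(self.hidden, arch):
            x = torch.relu(layer(x, width))
        return self.head(x, OUT_DIM)

def zero_cost_score(net, arch, batch):
    """SNIP-style saliency sum(|g * w|) from one backward pass; a stand-in
    for the paper's zero-cost selector."""
    x, y = batch
    net.zero_grad()
    F.cross_entropy(net(x, arch), y).backward()
    return sum((p.grad * p).abs().sum().item()
               for p in net.parameters() if p.grad is not None)

net = SuperNet()
batch = (torch.randn(16, IN_DIM), torch.randint(0, OUT_DIM, (16,)))

# Stage 1: shrink the sample space up front with the zero-cost proxy.
candidates = {tuple(random.choice(WIDTH_CHOICES) for _ in range(NUM_LAYERS))
              for _ in range(50)}
preferred = sorted(candidates,
                   key=lambda a: zero_cost_score(net, a, batch),
                   reverse=True)[:5]

# Stage 2: weight-sharing one-shot training restricted to the preferred set,
# so far fewer subnets compete for the shared weights.
opt = torch.optim.SGD(net.parameters(), lr=0.1)
for step in range(10):
    arch = random.choice(preferred)
    x, y = batch
    opt.zero_grad()
    F.cross_entropy(net(x, arch), y).backward()
    opt.step()

print("preferred architectures:", preferred)
```

Restricting sampling to the preferred set is what "alleviate update conflicts" refers to: fewer subnets pull on the shared weights in different directions. And because the preferred architectures are fixed before training, any of them can be deployed directly afterwards, which is the "instant specialization with zero search cost" claim.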

Haibin Wang, Ce Ge, Hesen Chen, Xiuyu Sun • 2023

Related benchmarks

Task                  Dataset           Result          Rank
Image Classification  CIFAR-100         -               302
Image Classification  iNaturalist 2019  Top-1 Acc 76.4  98
Image Classification  Flowers           Accuracy 97.6   83
