
Generative Adversarial Neural Architecture Search

About

Despite the empirical success of neural architecture search (NAS) in deep learning applications, the optimality, reproducibility, and cost of NAS schemes remain hard to assess. In this paper, we propose Generative Adversarial NAS (GA-NAS), which comes with theoretically provable convergence guarantees and promotes stability and reproducibility in neural architecture search. Inspired by importance sampling, GA-NAS iteratively fits a generator to previously discovered top architectures, thus increasingly focusing on important parts of a large search space. Furthermore, we propose an efficient adversarial learning approach in which the generator is trained by reinforcement learning on rewards provided by a discriminator, allowing it to explore the search space without evaluating a large number of architectures. Extensive experiments show that GA-NAS beats the best published results in several cases on three public NAS benchmarks. Moreover, GA-NAS can handle ad hoc search constraints and search spaces. We show that GA-NAS can improve already-optimized baselines found by other NAS methods, including EfficientNet and ProxylessNAS, in terms of ImageNet accuracy or the number of parameters, within their original search spaces.
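The loop described above can be illustrated with a toy sketch. This is not the authors' implementation: the bit-vector search space, the logistic discriminator, the Bernoulli generator, and the `fitness` function are all simplified stand-ins chosen to make the generator/discriminator/REINFORCE interaction concrete in a few dozen lines.

```python
import math
import random

random.seed(0)

N_BITS = 8  # toy search space: an "architecture" is an 8-bit string

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fitness(arch):
    # Stand-in for an expensive architecture evaluation (e.g., validation accuracy).
    return sum(arch)

def sample_arch(gen_logits):
    # Generator: independent Bernoulli per bit, parameterized by logits.
    return tuple(1 if random.random() < sigmoid(l) else 0 for l in gen_logits)

def disc_prob(disc_w, disc_b, arch):
    # Discriminator: logistic model scoring how "top-architecture-like" arch is.
    return sigmoid(sum(w * a for w, a in zip(disc_w, arch)) + disc_b)

def train_discriminator(disc_w, disc_b, real, fake, lr=0.5, steps=50):
    # Logistic-regression updates: previously discovered top-k -> 1, generated -> 0.
    for _ in range(steps):
        for arch, label in [(a, 1.0) for a in real] + [(a, 0.0) for a in fake]:
            err = label - disc_prob(disc_w, disc_b, arch)
            for i in range(N_BITS):
                disc_w[i] += lr * err * arch[i]
            disc_b += lr * err
    return disc_w, disc_b

def reinforce_step(gen_logits, disc_w, disc_b, n=32, lr=0.5):
    # REINFORCE: the discriminator's score is the generator's reward, so the
    # generator never needs to call the expensive fitness() during this phase.
    samples = [sample_arch(gen_logits) for _ in range(n)]
    rewards = [disc_prob(disc_w, disc_b, a) for a in samples]
    baseline = sum(rewards) / n
    for arch, r in zip(samples, rewards):
        adv = r - baseline
        for i in range(N_BITS):
            p = sigmoid(gen_logits[i])
            gen_logits[i] += lr * adv * (arch[i] - p)  # grad of log Bernoulli prob
    return gen_logits

# Outer loop: iteratively refit the generator to the top architectures found so far.
gen_logits = [0.0] * N_BITS
history = []
for it in range(10):
    batch = [sample_arch(gen_logits) for _ in range(16)]
    history.extend((fitness(a), a) for a in batch)    # evaluate a small batch
    history = sorted(set(history), reverse=True)[:8]  # keep the top-k discovered
    top_k = [a for _, a in history]
    fake = [sample_arch(gen_logits) for _ in range(16)]
    disc_w, disc_b = train_discriminator([0.0] * N_BITS, 0.0, top_k, fake)
    for _ in range(20):
        gen_logits = reinforce_step(gen_logits, disc_w, disc_b)

best = max(history)[0]
print("best fitness found:", best)
```

Only the 16 architectures per outer iteration are ever evaluated with `fitness`; the inner RL steps are driven entirely by the discriminator, mirroring how the paper's generator explores without evaluating many architectures.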

Seyed Saeed Changiz Rezaei, Fred X. Han, Di Niu, Mohammad Salameh, Keith Mills, Shuo Lian, Wei Lu, Shangling Jui • 2021

Related benchmarks

Task                       | Dataset                              | Result          | Rank
Neural Architecture Search | NAS-Bench-201 ImageNet-16-120 (test) | Accuracy: 46.8  | 86
Neural Architecture Search | NAS-Bench-201 CIFAR-10 (test)        | Accuracy: 94.34 | 85
Neural Architecture Search | NAS-Bench-201 CIFAR-100 (test)       | Accuracy: 73.28 | 78
