
Arch-Graph: Acyclic Architecture Relation Predictor for Task-Transferable Neural Architecture Search

About

Neural Architecture Search (NAS) aims to find efficient models for multiple tasks. Beyond seeking solutions for a single task, there is surging interest in transferring network design knowledge across tasks. In this line of research, effectively modeling task correlations is vital yet largely neglected. We therefore propose Arch-Graph, a transferable NAS method that predicts task-specific optimal architectures from given task embeddings. It leverages correlations across multiple tasks by using their embeddings as part of the predictor's input, enabling fast adaptation. We also formulate NAS as an architecture relation graph prediction problem, where the relational graph is constructed by treating candidate architectures as nodes and their pairwise relations as edges. To enforce basic properties such as acyclicity in the relational graph, we add constraints to the optimization process, converting NAS into the problem of finding a Maximal Weighted Acyclic Subgraph (MWAS). Our algorithm then strives to eliminate cycles and establishes edges in the graph only where the ranking results can be trusted. Through MWAS, Arch-Graph can effectively rank candidate models for each task with only a small budget for fine-tuning the predictor. Extensive experiments on TransNAS-Bench-101 demonstrate Arch-Graph's transferability and high sample efficiency across numerous tasks, outperforming many NAS methods designed for both single-task and multi-task search. On average, it finds architectures in the top 0.16% and 0.29% of two search spaces under a budget of only 50 models.
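The MWAS idea above can be illustrated with a minimal sketch. The function below is hypothetical (`mwas_rank`, its greedy cycle-breaking, and the `pairwise_scores` input are assumptions, not the paper's actual algorithm): candidate architectures are nodes, confident pairwise "a beats b" predictions are weighted edges, cycles are broken at their weakest edge as a greedy approximation of MWAS, and a topological sort of the resulting DAG yields a ranking.

```python
# Hypothetical sketch of the MWAS formulation: nodes are candidate
# architectures, weighted directed edges are pairwise rank predictions.
# Only confident edges are kept; cycles are greedily broken at their
# lightest edge; a topological sort of the DAG gives the final ranking.
from collections import defaultdict

def mwas_rank(num_archs, pairwise_scores, threshold=0.6):
    # pairwise_scores[(a, b)] = predicted probability that architecture a
    # outperforms architecture b (assumed output of a relation predictor).
    edges = {(a, b): p for (a, b), p in pairwise_scores.items() if p >= threshold}

    def find_cycle():
        # DFS-based cycle detection; returns the cycle as a list of edges.
        adj = defaultdict(list)
        for (a, b) in edges:
            adj[a].append(b)
        color = {}  # node -> 0 (on current DFS path), 1 (finished)
        stack = []

        def dfs(u):
            color[u] = 0
            stack.append(u)
            for v in adj[u]:
                if v not in color:
                    cyc = dfs(v)
                    if cyc:
                        return cyc
                elif color[v] == 0:
                    path = stack[stack.index(v):] + [v]
                    return list(zip(path, path[1:]))
            color[u] = 1
            stack.pop()
            return None

        for u in range(num_archs):
            if u not in color:
                cyc = dfs(u)
                if cyc:
                    return cyc
        return None

    # Greedy MWAS approximation: break each cycle at its weakest edge,
    # i.e. discard the least trusted pairwise prediction first.
    cycle = find_cycle()
    while cycle:
        weakest = min(cycle, key=lambda e: edges[e])
        del edges[weakest]
        cycle = find_cycle()

    # Rank by topological order of the now-acyclic graph (Kahn's algorithm).
    indeg = {u: 0 for u in range(num_archs)}
    adj = defaultdict(list)
    for (a, b) in edges:
        adj[a].append(b)
        indeg[b] += 1
    order = []
    ready = [u for u in range(num_archs) if indeg[u] == 0]
    while ready:
        u = ready.pop()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order
```

For example, predictions 0→1 (0.9), 1→2 (0.8), 2→0 (0.7) form a cycle; the weakest edge 2→0 is dropped, and the ranking becomes 0, 1, 2.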

Minbin Huang, Zhijian Huang, Changlin Li, Xin Chen, Hang Xu, Zhenguo Li, Xiaodan Liang • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Neural Architecture Search (Performance Prediction) | NAS-Bench-201 (test) | Kendall's Tau | 0.67 | 18 |
| Multi-task Neural Architecture Search | TransNAS-Bench-101 macro-level search space | Cls O Acc | 47.44 | 14 |
| Object Classification | TransNAS-Bench-101 micro-level search space | Accuracy | 45.81 | 13 |
| Room Layout Reconstruction | TransNAS-Bench-101 micro-level search space | L2 Loss | 60.08 | 13 |
| Scene Classification | TransNAS-Bench-101 micro-level search space | Accuracy | 54.9 | 13 |
| Surface Normal Prediction | TransNAS-Bench-101 micro-level search space | SSIM | 58.27 | 13 |
| Semantic Segmentation | TransNAS-Bench-101 micro-level search space | mIoU | 25.73 | 13 |
| Autoencoder | TransNAS-Bench-101 micro-level search space | SSIM | 56.61 | 13 |
| Neural Architecture Search | TransNAS-Bench-101 macro-level search space | Kendall's Tau | 0.61 | 7 |
| Jigsaw | TransNAS-Bench-101 micro-level search space | Accuracy | 0.9466 | 6 |

Other info

Code
