
CATE: Computation-aware Neural Architecture Encoding with Transformers

About

Recent works (White et al., 2020a; Yan et al., 2020) demonstrate the importance of architecture encodings in Neural Architecture Search (NAS). These encodings capture either the structure or the computation information of neural architectures. Compared to structure-aware encodings, computation-aware encodings map architectures with similar accuracies to the same region, which improves downstream architecture search performance (Zhang et al., 2019; White et al., 2020a). In this work, we introduce a Computation-Aware Transformer-based Encoding method called CATE. Unlike existing computation-aware encodings based on fixed transformations (e.g., path encoding), CATE employs a pairwise pre-training scheme to learn computation-aware encodings using Transformers with cross-attention. Such learned encodings contain dense and contextualized computation information of neural architectures. We compare CATE with eleven encodings under three major encoding-dependent NAS subroutines in both small and large search spaces. Our experiments show that CATE is beneficial to the downstream search, especially in the large search space. Moreover, the outside-search-space experiment demonstrates its superior generalization ability beyond the search space on which it was trained. Our code is available at: https://github.com/MSU-MLSys-Lab/CATE.
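The abstract contrasts CATE's learned encodings with fixed-transformation computation-aware encodings such as path encoding (White et al., 2020a). As a rough illustration of that baseline, below is a minimal sketch of path encoding for a toy cell DAG: each possible sequence of operations along an input-to-output path gets one bit. The op set, DAG layout, and helper names here are hypothetical for illustration; this is not CATE's actual pipeline.

```python
from itertools import product

# Hypothetical mini op set (not the actual NAS-Bench-101 vocabulary).
OPS = ["conv3x3", "conv1x1", "maxpool3x3"]

def enumerate_paths(adj, ops):
    """Return all op-sequences along input->output paths of a cell DAG.

    adj: adjacency matrix (list of lists of 0/1); node 0 is the input,
    the last node is the output; ops[i] labels node i (input/output
    placeholders are skipped).
    """
    n = len(adj)
    paths = []

    def dfs(node, path):
        if node == n - 1:                 # reached the output node
            paths.append(tuple(path))
            return
        for nxt in range(n):
            if adj[node][nxt]:
                ext = [ops[nxt]] if 0 < nxt < n - 1 else []
                dfs(nxt, path + ext)

    dfs(0, [])
    return paths

def path_encoding(adj, ops, max_len=3):
    """Fixed-transformation path encoding: one bit per possible
    op-sequence of length <= max_len."""
    all_paths = [p for L in range(max_len + 1)
                 for p in product(OPS, repeat=L)]
    index = {p: i for i, p in enumerate(all_paths)}
    vec = [0] * len(index)
    for p in enumerate_paths(adj, ops):
        if p in index:
            vec[index[p]] = 1
    return vec

# Toy cell: input -> conv3x3 -> output and input -> maxpool3x3 -> output.
adj = [[0, 1, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
ops = ["input", "conv3x3", "maxpool3x3", "output"]
vec = path_encoding(adj, ops)
```

Because the transformation is fixed, two architectures share encoding bits exactly when they share op-paths; CATE instead learns the encoding from pairs of architectures, so similarity in computation can be captured beyond exact path overlap.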

Shen Yan, Kaiqiang Song, Fei Liu, Mi Zhang • 2021

Related benchmarks

Task                          Dataset              Result                        Rank
Image Classification          ImageNet 1k (test)   --                            798
Neural Architecture Search    NAS-Bench-101        Final Test Error: 0.0588      2
Neural Architecture Search    NAS-Bench-301        Final Test Error (%): 5.28    2

Other info

Code
