
NAR-Former V2: Rethinking Transformer for Universal Neural Network Representation Learning

About

As more deep learning models are deployed in real-world applications, there is a growing need to model and learn representations of the neural networks themselves. An efficient representation can be used to predict target attributes of a network without actual training and deployment, facilitating efficient network design and deployment. Recently, inspired by the success of the Transformer, some Transformer-based representation learning frameworks have been proposed and have achieved promising performance on cell-structured models. However, graph neural network (GNN) based approaches still dominate representation learning for entire networks. In this paper, we revisit the Transformer and compare it with GNNs to analyze their different architectural characteristics. We then propose NAR-Former V2, a modified Transformer-based universal neural network representation learning model that can learn efficient representations from both cell-structured networks and entire networks. Specifically, we first treat the network as a graph and design a straightforward tokenizer to encode the network into a sequence. Then, we incorporate the inductive representation learning capability of GNNs into the Transformer, enabling it to generalize better to unseen architectures. Additionally, we introduce a series of simple yet effective modifications to enhance the Transformer's ability to learn representations from graph structures. Our proposed method surpasses the GNN-based method NNLP by a significant margin in latency estimation on the NNLQP dataset, and achieves performance highly comparable to other state-of-the-art methods in accuracy prediction on the NAS-Bench-101 and NAS-Bench-201 datasets.
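The abstract describes encoding a network as a graph and tokenizing it into a sequence, one token per operation with its topology. The sketch below is purely illustrative: the operation vocabulary, token layout, and function names are our own assumptions, not the paper's actual encoding.

```python
# Hypothetical sketch of turning a network graph (DAG) into a token
# sequence, in the spirit of the tokenizer described in the abstract.
# OP_VOCAB, the (op id, predecessor list) token layout, and all names
# are illustrative assumptions, not the paper's actual format.
OP_VOCAB = {"input": 0, "conv3x3": 1, "conv1x1": 2, "maxpool": 3, "output": 4}

def tokenize_network(nodes, edges):
    """nodes: list of op-name strings indexed by node id;
    edges: list of (src, dst) pairs forming a DAG.
    Returns one token per node: (op id, sorted predecessor ids)."""
    preds = {i: [] for i in range(len(nodes))}
    for src, dst in edges:
        preds[dst].append(src)
    return [(OP_VOCAB[op], sorted(preds[i])) for i, op in enumerate(nodes)]

# A tiny diamond-shaped cell: input feeds a conv and a pool,
# both of which feed the output node.
tokens = tokenize_network(
    ["input", "conv3x3", "maxpool", "output"],
    [(0, 1), (0, 2), (1, 3), (2, 3)],
)
# tokens → [(0, []), (1, [0]), (3, [0]), (4, [1, 2])]
```

A real tokenizer would additionally embed operation attributes (kernel size, channels) and position information; this sketch only captures the graph-to-sequence step.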

Yun Yi, Haokui Zhang, Rong Xiao, Nannan Wang, Xiaoyu Wang • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Accuracy Prediction | NAS-Bench-101 1.0 | Kendall's Tau | 0.861 | 46 |
| Latency Prediction | NNLQP (Same Distribution) | MAPE | 1.18 | 44 |
| Accuracy Prediction | NAS-Bench-201 8 (whole dataset) | Kendall's Tau | 0.888 | 36 |
| Latency Prediction | NNLQ in-domain v1 (test) | MAPE (Average) | 1.8 | 33 |
| Accuracy Prediction | NAS-Bench-101 (test) | Kendall's Tau | 0.861 | 18 |
| Accuracy Prediction | NAS-Bench-101 100 samples (test) | Kendall's Tau | 0.802 | 10 |
| Accuracy Prediction | NAS-Bench-201 5% samples (train) | Kendall's Tau | 0.874 | 8 |
| Latency Prediction | NNLQ Out-of-domain GoogleNet | MAPE (avg) | 6.61 | 8 |
| Latency Prediction | NNLQ Out-of-domain MobileNetV3 | MAPE (%) | 9.06 | 8 |
| Latency Prediction | NNLQ Out-of-domain ResNet | MAPE (avg) | 6.8 | 8 |

Showing 10 of 31 rows.
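The benchmarks above report two metrics: Kendall's Tau (rank correlation between predicted and true accuracies, higher is better) and MAPE (mean absolute percentage error of predicted latency, lower is better). A minimal pure-Python sketch of both, for reference; function names are our own, and this simple Kendall's Tau variant ignores ties:

```python
def kendall_tau(x, y):
    """Kendall rank correlation: (concordant - discordant) pairs
    divided by the total number of pairs. Ties are not handled."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def mape(pred, true):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(p - t) / abs(t) for p, t in zip(pred, true)) / len(pred)
```

For example, a predictor that ranks every pair of architectures correctly scores a Kendall's Tau of 1.0, while a MAPE of 1.18 means predicted latencies deviate from measured ones by about 1.18% on average.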
