
Adversarial Attacks on Neural Networks for Graph Data

About

Deep learning models for graphs have achieved strong performance on the task of node classification. Despite their proliferation, there is currently no study of their robustness to adversarial attacks. Yet in domains where they are likely to be used, e.g. the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this work, we introduce the first study of adversarial attacks on attributed graphs, focusing specifically on models that exploit ideas of graph convolutions. In addition to attacks at test time, we tackle the more challenging class of poisoning/causative attacks, which target the training phase of a machine learning model. We generate adversarial perturbations of the node features and the graph structure, thus taking the dependencies between instances into account. Moreover, we ensure that the perturbations remain unnoticeable by preserving important data characteristics. To cope with the underlying discrete domain, we propose an efficient algorithm, Nettack, which exploits incremental computations. Our experimental study shows that the accuracy of node classification drops significantly even under only a few perturbations. Moreover, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and unsupervised approaches, and are likewise successful even when only limited knowledge about the graph is available.
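To illustrate the kind of attack described above, here is a minimal sketch of a greedy structure perturbation against a linearized two-layer graph-convolution surrogate (logits Z = Â²XW, with Â the normalized adjacency). This is an illustrative simplification, not the authors' Nettack implementation: the brute-force edge-flip loop, the `budget` parameter, and the function names are assumptions, and the actual algorithm additionally uses incremental score updates and unnoticeability constraints.

```python
import numpy as np

def normalized_adj(A):
    # GCN-style normalization: Â = D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def surrogate_logits(A, X, W):
    # Linearized two-layer graph convolution: Z = Â² X W (nonlinearity dropped)
    A_n = normalized_adj(A)
    return A_n @ A_n @ X @ W

def margin(Z, target, true_class):
    # Classification margin of the target node:
    # logit of the true class minus the best competing logit
    z = Z[target]
    others = np.delete(z, true_class)
    return z[true_class] - others.max()

def greedy_structure_attack(A, X, W, target, true_class, budget):
    # Greedily flip (add or remove) the single incident edge that most
    # reduces the target node's margin, up to `budget` flips.
    A = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        best_flip = None
        best_margin = margin(surrogate_logits(A, X, W), target, true_class)
        for u in range(n):
            if u == target:
                continue
            A2 = A.copy()
            A2[target, u] = A2[u, target] = 1 - A2[target, u]  # flip edge
            m = margin(surrogate_logits(A2, X, W), target, true_class)
            if m < best_margin:
                best_flip, best_margin = u, m
        if best_flip is None:  # no flip helps any further
            break
        A[target, best_flip] = A[best_flip, target] = 1 - A[target, best_flip]
    return A
```

Since each flip is only accepted when it strictly lowers the target's margin, the attacked graph's margin is never worse (for the attacker) than the clean graph's, mirroring the drop in classification confidence reported in the abstract.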

Daniel Zügner, Amir Akbarnejad, Stephan Günnemann • 2018

Related benchmarks

Task                                        Dataset                                     Metric                        Result  Rank
Community Detection                         OGB-arxiv                                   Avg Communities               3.96    38
Community Detection                         Flickr                                      M1 Score                      5.27    20
Community Detection                         Photos                                      M1 Score                      7.15    20
Community Detection                         Cora                                        M1 Score                      3.08    20
Counterfactual Explanations                 Loan-Decision                               Misclassification Rate        32      19
Community Detection                         DBLP                                        F1 Score                      4.45    16
Counterfactual Explanation                  Aggregate of six datasets (incl. Cora)      Misclassification Rank (Avg)  3.3     10
Counterfactual Explanation                  Ogbn-arxiv                                  Misclassification Rate        86      10
Performance of counterfactual explanations  TREE-CYCLES                                 Misclass Rate                 58      10
