Robustness Verification of Graph Neural Networks Via Lightweight Satisfiability Testing
About
Graph neural networks (GNNs) are the predominant architecture for learning over graphs. As with any machine learning model, an important issue is the detection of adversarial attacks, where an adversary can change the model's output with a small perturbation of the input. Techniques for solving the adversarial robustness problem (determining whether such an attack exists) were originally developed for image classification. In graph learning, the attack model usually considers changes to the graph structure in addition to, or instead of, the numerical features of the input, and state-of-the-art techniques proceed via reduction to constraint solving, running on top of powerful solvers, e.g. for mixed integer programming. We show that it is possible to improve on the state of the art in structural robustness by replacing calls to powerful solvers with calls to efficient partial solvers, which run in polynomial time but may be incomplete. We evaluate our tool RobLight on a diverse set of GNN variants and datasets.
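The core idea of an incomplete (partial) solver can be sketched as follows: instead of exactly deciding whether an attack exists, a polynomial-time check either *certifies* robustness or returns UNKNOWN. A standard instance of this pattern is certification from per-class logit bounds, as in bound-propagation-style verifiers. The sketch below is illustrative only; the function and variable names (`certify`, `logit_lb`, `logit_ub`) are hypothetical and not taken from RobLight.

```python
# Illustrative sketch of a sound-but-incomplete robustness check.
# Assumption: some polynomial-time procedure (e.g. bound propagation
# through the GNN) has produced lower/upper bounds on each class logit
# over all admissible structural perturbations of the input graph.

from enum import Enum

class Verdict(Enum):
    ROBUST = "robust"    # certified: no attack within the budget exists
    UNKNOWN = "unknown"  # the partial check could not decide

def certify(logit_lb, logit_ub, true_class):
    """Certify robustness from per-class logit bounds.

    If the true class's lower bound strictly exceeds every other
    class's upper bound, no admissible perturbation can flip the
    prediction. Otherwise the check is inconclusive: the instance may
    still be robust, but deciding that would need a complete solver.
    """
    lb_true = logit_lb[true_class]
    for c, ub in enumerate(logit_ub):
        if c != true_class and ub >= lb_true:
            return Verdict.UNKNOWN  # incomplete: cannot certify
    return Verdict.ROBUST

# Tight bounds let the cheap check succeed; loose bounds do not.
print(certify([2.0, -1.0], [3.0, 1.5], 0))  # Verdict.ROBUST
print(certify([2.0, -1.0], [3.0, 2.5], 0))  # Verdict.UNKNOWN
```

In a full pipeline, instances left UNKNOWN by such a check would be the only ones requiring an expensive complete solver, which is where the speedup over a pure constraint-solving reduction comes from.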
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Node Classification Robustness Certification | Texas | Solved Instances Count (All) | 732 | 45 |
| Node Classification Robustness Certification | Cornell | Solved Count (All Instances) | 732 | 36 |
| Node Classification Robustness Certification | Wisconsin | Time (s) (All Instances) | 0.01 | 36 |
| Node Classification Robustness Certification | Cora | Solved Instances Count (All) | 1.08e+4 | 30 |
| Node Classification | Citeseer | Count | 568 | 28 |
| Node Classification | Cora | Number of Trials | 580 | 26 |
| Node Classification | Wisconsin | Count | 65 | 22 |
| Robustness Verification | ENZYMES Robust instances | Solved Count | 559 | 20 |
| Node Classification | Cornell | Iterations | 73 | 18 |
| Node Classification Robustness Certification | Citeseer | Count Solved (All Instances) | 3.31e+3 | 18 |