
Robustness Verification of Graph Neural Networks Via Lightweight Satisfiability Testing

About

Graph neural networks (GNNs) are the predominant architecture for learning over graphs. As with any machine learning model, an important issue is the detection of attacks, where an adversary can change the output with a small perturbation of the input. Techniques for solving the adversarial robustness problem (determining whether an attack exists) were originally developed for image classification. In the case of graph learning, the attack model usually considers changes to the graph structure in addition to, or instead of, the numerical features of the input, and the state-of-the-art techniques proceed via reduction to constraint solving, working on top of powerful solvers, e.g. for mixed integer programming. We show that it is possible to improve on the state of the art in structural robustness by replacing the use of powerful solvers with calls to efficient partial solvers, which run in polynomial time but may be incomplete. We evaluate our tool RobLight on a diverse set of GNN variants and datasets.
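To make the complete-vs-partial distinction concrete, here is a minimal toy sketch in Python. It is not RobLight's algorithm: the one-layer mean-aggregation model, the restriction to edge flips incident to the target node, and the crude margin bound are all illustrative assumptions. The point is only the pattern: the partial check runs in polynomial time and answers "robust" or "unknown", while the exact check enumerates all perturbations and is complete but exponential in the budget.

```python
# Illustrative contrast between a complete robustness check and a cheap,
# incomplete "partial solver" certificate. Hypothetical toy model, not RobLight.
from itertools import combinations

def predict(adj, feats, v):
    """Mean-aggregate features of v and its neighbours, return argmax class."""
    nbrs = [u for u in range(len(adj)) if adj[v][u] or u == v]
    agg = [sum(feats[u][c] for u in nbrs) / len(nbrs)
           for c in range(len(feats[0]))]
    return max(range(len(agg)), key=lambda c: agg[c])

def certify_partial(adj, feats, v, budget):
    """Polynomial-time, incomplete check: 'robust' or 'unknown'.

    Sufficient condition: each of the <= budget edge flips at v can shift
    the aggregated mean by at most 2*fmax / (|N(v)| - budget) per class,
    so if the top-two margin exceeds twice the total shift, no attack
    within the budget can change the prediction."""
    nbrs = [u for u in range(len(adj)) if adj[v][u] or u == v]
    agg = [sum(feats[u][c] for u in nbrs) / len(nbrs)
           for c in range(len(feats[0]))]
    top, runner = sorted(agg, reverse=True)[:2]
    fmax = max(abs(x) for row in feats for x in row)
    worst_shift = budget * 2 * fmax / max(1, len(nbrs) - budget)
    return "robust" if (top - runner) > 2 * worst_shift else "unknown"

def certify_exact(adj, feats, v, budget):
    """Complete but exponential check: try every set of <= budget edge
    flips incident to v (a local attack model, assumed for simplicity)."""
    base = predict(adj, feats, v)
    edges = [(v, u) for u in range(len(adj)) if u != v]
    for k in range(1, budget + 1):
        for flips in combinations(edges, k):
            pert = [row[:] for row in adj]
            for a, b in flips:
                pert[a][b] = pert[b][a] = 1 - pert[a][b]
            if predict(pert, feats, v) != base:
                return "attacked"
    return "robust"
```

With a comfortable margin the partial check certifies robustness outright; when the margin is tight it returns "unknown" even on instances the exact check can prove robust, which is exactly the incompleteness the abstract refers to.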

Chia-Hsuan Lu, Tony Tan, Michael Benedikt • 2025

Related benchmarks

Task | Dataset | Result | Rank
Node Classification Robustness Certification | Texas | Solved Instances Count (All): 732 | 45
Node Classification Robustness Certification | Cornell | Solved Count (All Instances): 732 | 36
Node Classification Robustness Certification | Wisconsin | Time (s) (All Instances): 0.01 | 36
Node Classification Robustness Certification | Cora | Solved Instances Count (All): 1.08e+4 | 30
Node Classification | Citeseer | Count: 568 | 28
Node Classification | Cora | Number of Trials: 580 | 26
Node Classification | Wisconsin | Count: 65 | 22
Robustness Verification | ENZYMES Robust instances | Solved Count: 559 | 20
Node Classification | Cornell | Iterations: 73 | 18
Node Classification Robustness Certification | Citeseer | Count Solved (All Instances): 3.31e+3 | 18

(Showing 10 of 46 rows)
