
Relational Concept Bottleneck Models

About

The design of interpretable deep learning models for relational domains poses an open challenge: interpretable deep learning methods, such as Concept Bottleneck Models (CBMs), are not designed to solve relational problems, while relational deep learning models, such as Graph Neural Networks (GNNs), are not as interpretable as CBMs. To overcome these limitations, we propose Relational Concept Bottleneck Models (R-CBMs), a family of relational deep learning methods providing interpretable task predictions. We show that, as special cases, R-CBMs can represent both standard CBMs and message-passing GNNs. To evaluate the effectiveness and versatility of these models, we designed a class of experimental problems ranging from image classification to link prediction in knowledge graphs. In particular, we show that R-CBMs (i) match the generalization performance of existing relational black boxes, (ii) support the generation of quantified concept-based explanations, (iii) effectively respond to test-time interventions, and (iv) withstand demanding settings including out-of-distribution scenarios, limited training data regimes, and scarce concept supervision.
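The core concept-bottleneck idea behind this family of models can be illustrated with a short sketch: the network first predicts a small set of human-interpretable concepts, and the task prediction is computed from those concepts alone, which is what makes test-time interventions possible. This is a minimal NumPy sketch of a plain (non-relational) CBM, not the authors' R-CBM implementation; all dimensions, weights, and function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions: 8 input features, 3 interpretable concepts, 2 task classes.
W_enc = rng.normal(size=(8, 3))   # input -> concept logits
W_task = rng.normal(size=(3, 2))  # concepts -> task logits

def cbm_forward(x):
    """Concept bottleneck: predict concepts first, then the task from concepts only."""
    concepts = sigmoid(x @ W_enc)   # interpretable intermediate layer in [0, 1]
    task_logits = concepts @ W_task # the task head sees nothing but the concepts
    return concepts, task_logits

def intervene(concepts, index, value):
    """Test-time intervention: overwrite one predicted concept with a known value."""
    fixed = concepts.copy()
    fixed[..., index] = value
    return fixed

x = rng.normal(size=(4, 8))                 # a batch of 4 inputs
concepts, task_logits = cbm_forward(x)
fixed = intervene(concepts, index=0, value=1.0)
task_logits_fixed = fixed @ W_task          # the correction propagates to the task
```

Because the task head depends only on the concept layer, correcting a single concept immediately changes the downstream prediction; the relational extension in the paper applies the same bottleneck over concepts defined on tuples of entities rather than single inputs.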

Pietro Barbiero, Francesco Giannini, Gabriele Ciravegna, Michelangelo Diligenti, Giuseppe Marra • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Node Classification | Cora | Accuracy | 99.99 | 885 |
| Link Prediction | FB15k-237 (test) | Hits@10 | 53.3 | 419 |
| Link Prediction | WN18RR (test) | Hits@10 | 56.3 | 380 |
| Multi-class classification | Hanoi | ROC AUC | 100 | 11 |
| Multi-class classification | RPS | ROC AUC | 1 | 11 |
| Binary Classification | Citeseer | Accuracy | 67.16 | 7 |
| Binary Classification | Pubmed | Accuracy | 75.86 | 7 |
| Link Prediction | Countries S1 | MRR | 98.33 | 4 |
| Link Prediction | Countries S2 | MRR | 0.9227 | 4 |
