
Towards Automatic Concept-based Explanations

About

Interpretability has become an important topic of research as more machine learning (ML) models are deployed and widely used to make important decisions. Most current explanation methods provide explanations through feature importance scores, which identify the features that are important for each individual input. However, systematically summarizing and interpreting such per-sample feature importance scores is itself challenging. In this work, we propose principles and desiderata for concept-based explanation, which goes beyond per-sample features to identify higher-level, human-understandable concepts that apply across the entire dataset. We develop a new algorithm, ACE, to automatically extract visual concepts. Our systematic experiments demonstrate that ACE discovers concepts that are human-meaningful, coherent, and important for the neural network's predictions.
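For context, ACE as described in the paper proceeds roughly in three steps: segment each image of a class at multiple resolutions, cluster the segments' activations from an intermediate layer of the trained network to group similar segments into concepts, and score each concept's importance with TCAV. Below is a minimal, illustrative Python sketch of the first two steps; `model_activations` is a hypothetical stand-in for the network's bottleneck-layer features, and the segmentation and clustering parameters are assumptions rather than the paper's exact settings.

```python
# Illustrative sketch of the segment-and-cluster stage of a concept-extraction
# pipeline in the spirit of ACE. Assumes scikit-image, scikit-learn, and numpy;
# `model_activations` is a hypothetical function returning a 1-D feature vector
# from an intermediate layer of the trained network.
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans


def extract_concepts(images, model_activations, n_concepts=25,
                     segments_per_resolution=(15, 50, 80)):
    """Cluster multi-resolution image segments into candidate concepts."""
    patches, acts = [], []
    for img in images:
        for n_seg in segments_per_resolution:
            # Superpixel segmentation at one resolution (SLIC is one choice).
            seg_map = slic(img, n_segments=n_seg, compactness=20)
            for label in np.unique(seg_map):
                mask = (seg_map == label)[..., None]
                # Keep the segment, grey out the rest (resizing/cropping omitted).
                patch = np.where(mask, img, 0.5)
                patches.append(patch)
                acts.append(model_activations(patch))
    acts = np.stack(acts)
    # Segments whose activations cluster together form one candidate concept.
    clusters = KMeans(n_clusters=n_concepts).fit_predict(acts)
    return [
        [patches[i] for i in np.where(clusters == c)[0]]
        for c in range(n_concepts)
    ]
```

The returned clusters are the candidate concepts; the paper then filters out incoherent or infrequent clusters and ranks the remaining concepts by their TCAV importance scores, a step not shown in this sketch.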

Amirata Ghorbani, James Wexler, James Zou, Been Kim • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Explainability Evaluation | Husky vs. Wolf | Session 1 Utility Score | 68.8 | 11
Explainability Evaluation | Leaves | Session 1 Utility Score | 79.8 | 11
Explainability Evaluation | Kit Fox vs. Red Fox | Session 1 Utility Score | 0.484 | 11
Predicting Model Output | Husky vs. Wolf | Session 1 Accuracy | 60.4 | 5
Predicting Model Output | Kit Fox vs. Red Fox | Session 1 Accuracy | 80.6 | 5
Predicting Model Output | Otter vs. Beaver | Session 1 Accuracy | 80.4 | 5
