
How to Explain Individual Classification Decisions

About

After building a classifier with modern machine-learning tools, we typically have a black box at hand that predicts well on unseen data. Thus, we get an answer to the question of what the most likely label of a given unseen data point is. However, most methods provide no answer as to why the model predicted a particular label for a single instance, or which features were most influential for that instance. The only method currently able to provide such explanations is the decision tree. This paper proposes a procedure which, based on a set of assumptions, allows one to explain the decisions of any classification method.
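The idea of instance-level explanations can be illustrated with a minimal sketch: treat the classifier as a black box exposing a class-probability function, and characterize a single decision by the local gradient of that probability with respect to the input features, estimated by finite differences. The toy logistic model, its weights, and the helper names below are illustrative assumptions, not from the paper.

```python
import math

def predict_proba(x, w, b):
    """Probability of the positive class under a toy logistic model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def explanation_vector(x, predict, eps=1e-5):
    """Finite-difference gradient of predict(x) w.r.t. each feature.

    The sign and magnitude of each entry indicate how strongly (and in
    which direction) that feature locally influences the prediction for
    this particular instance -- a model-agnostic, per-instance summary.
    """
    grad = []
    for i in range(len(x)):
        x_hi = list(x); x_hi[i] += eps
        x_lo = list(x); x_lo[i] -= eps
        grad.append((predict(x_hi) - predict(x_lo)) / (2 * eps))
    return grad

# Which features locally drive the decision for this one instance?
w, b = [2.0, -0.5, 0.0], 0.1
x = [0.3, 1.2, -0.7]
ev = explanation_vector(x, lambda v: predict_proba(v, w, b))
```

Because the procedure only queries `predict`, it applies to any classifier that outputs class probabilities, which is the sense in which such explanations are method-agnostic.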

David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, Klaus-Robert Mueller · 2009

Related benchmarks

| Task                                 | Dataset               | Metric           | Result | Rank |
|--------------------------------------|-----------------------|------------------|--------|------|
| Histopathology Image Classification  | NSCLC (test)          | AUROC (Test)     | 96     | 22   |
| Tumor Localization                   | CAMELYON16 (test)     | AUC              | 95     | 20   |
| Trustworthiness Evaluation           | DVDs                  | Average F1       | 94.2   | 16   |
| Trustworthiness Evaluation           | Books                 | Average F1       | 94.3   | 16   |
| MIL Explanation                      | Pos-Neg (test)        | AUPRC            | 0.72   | 16   |
| MIL Explanation                      | 4-Bags (test)         | AUPRC            | 0.72   | 16   |
| MIL Explanation                      | Adjacent Pairs (test) | AUPRC (Class 2)  | 63     | 15   |
| Biomarker Prediction                 | LUAD TP53 (test)      | AUPC             | 66     | 12   |
| Biomarker Prediction                 | HNSC HPV (test)       | AUPC             | 87     | 12   |
