How to Explain Individual Classification Decisions
About
After building a classifier with modern machine learning tools, we typically have a black box that predicts well on unseen data. We thus get an answer to the question of what the most likely label of a given unseen data point is. However, most methods provide no answer to why the model predicted that particular label for a single instance, or which features were most influential for that instance. The only method currently able to provide such explanations is the decision tree. This paper proposes a procedure which (based on a set of assumptions) allows the decisions of any classification method to be explained.
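The core idea of such instance-level explanations can be illustrated with a local gradient: near a given data point, the gradient of the predicted class probability indicates which features most influence the decision. Below is a minimal sketch of that idea, not the paper's exact algorithm; the logistic model, its weights, and the finite-difference helper are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical probabilistic classifier: p(y=1|x) = sigmoid(w.x + b).
# Weights are chosen for illustration only; feature 3 is irrelevant (weight 0).
w = np.array([2.0, -1.0, 0.0])
b = 0.5

def p_pos(x):
    """Predicted probability of the positive class at point x."""
    return sigmoid(w @ x + b)

def explanation_vector(f, x0, eps=1e-5):
    """Central finite-difference gradient of f at x0.

    Treats the classifier as a black box: only probability
    evaluations are needed, no access to its internals.
    """
    grad = np.zeros_like(x0)
    for i in range(len(x0)):
        e = np.zeros_like(x0)
        e[i] = eps
        grad[i] = (f(x0 + e) - f(x0 - e)) / (2 * eps)
    return grad

# Explain the decision at one specific instance x0: large-magnitude
# entries mark locally influential features; the sign gives direction.
x0 = np.array([0.2, 0.4, 1.0])
ev = explanation_vector(p_pos, x0)
```

For this logistic model the local gradient is p(1-p)·w, so the irrelevant third feature gets a near-zero explanation weight, while the first two features receive weights whose signs match their influence on the positive class.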
David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, Klaus-Robert Mueller • 2009
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Histopathology Image Classification | NSCLC (test) | AUROC (Test) | 96 | 22 |
| Tumor Localization | CAMELYON16 (test) | AUC | 95 | 20 |
| Trustworthiness Evaluation | DVDs | Average F1 | 94.2 | 16 |
| Trustworthiness Evaluation | Books | Average F1 | 94.3 | 16 |
| MIL Explanation | Pos-Neg (test) | AUPRC | 0.72 | 16 |
| MIL Explanation | 4-Bags (test) | AUPRC | 0.72 | 16 |
| MIL Explanation | Adjacent Pairs (test) | AUPRC (Class 2) | 63 | 15 |
| Biomarker Prediction | LUAD TP53 (test) | AUPC | 66 | 12 |
| Biomarker Prediction | HNSC HPV (test) | AUPC | 87 | 12 |