# Model Agnostic Interpretability for Multiple Instance Learning

## About
In Multiple Instance Learning (MIL), models are trained on bags of instances, with only a single label provided per bag. A bag's label is often determined by just a handful of key instances, making it difficult to interpret what information a classifier uses to make its decisions. In this work, we establish the key requirements for interpreting MIL models. We then develop several model-agnostic approaches that meet these requirements. Our methods are compared against existing inherently interpretable MIL models on several datasets, and achieve an increase in interpretability accuracy of up to 30%. We also examine the methods' ability to identify interactions between instances and to scale to larger datasets, improving their applicability to real-world problems.
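To make the setup concrete, here is a minimal sketch of the standard MIL assumption (a bag is positive iff it contains at least one key instance) together with one simple model-agnostic way to score instance importance: occlude each instance and measure the change in the bag-level prediction. The `bag_label`, `occlusion_importance`, and max-pooling `model` names are hypothetical illustrations, not the paper's actual methods.

```python
import numpy as np

def bag_label(bag, threshold=0.5):
    # Standard MIL assumption: a bag is positive iff any instance is a key
    # instance (here, an instance score above a threshold).
    return int((bag > threshold).any())

def occlusion_importance(model, bag):
    # Model-agnostic instance importance: remove each instance in turn and
    # record how much the bag-level prediction drops. Works for any model
    # that maps a bag to a scalar score.
    full = model(bag)
    return [full - model(np.delete(bag, i)) for i in range(len(bag))]

# Stand-in "model": max-pooling over instance scores (a common MIL pooling).
model = lambda bag: float(bag.max()) if len(bag) else 0.0

bag = np.array([0.1, 0.9, 0.2])
print(bag_label(bag))                       # positive bag: one key instance
print(occlusion_importance(model, bag))     # the key instance gets the largest score
```

The occlusion scores single out the 0.9 instance as the one driving the positive bag label, which is exactly the kind of key-instance identification the interpretability methods above are evaluated on.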
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Histopathology Image Classification | NSCLC (test) | AUROC (Test) | 96 | 22 |
| Tumor localization | CAMELYON16 (test) | AUC | 95 | 20 |
| MIL Explanation | 4-Bags (test) | AUPRC | 0.89 | 16 |
| MIL Explanation | Pos-Neg (test) | AUPRC | 0.91 | 16 |
| MIL Explanation | Adjacent Pairs (test) | AUPRC (Class 2) | 77 | 15 |
| Biomarker Prediction | LUAD TP53 (test) | AUPC | 73 | 12 |
| Biomarker Prediction | HNSC HPV (test) | AUPC | 92 | 12 |