InterpretML: A Unified Framework for Machine Learning Interpretability
About
InterpretML is an open-source Python package that exposes machine learning interpretability algorithms to practitioners and researchers. It covers two types of interpretability: glassbox models, which are machine learning models designed to be interpretable (e.g., linear models, rule lists, generalized additive models), and blackbox explainability techniques for explaining existing systems (e.g., partial dependence, LIME). The package lets practitioners easily compare interpretability algorithms by exposing multiple methods under a unified API and by providing a built-in, extensible visualization platform. InterpretML also includes the first implementation of the Explainable Boosting Machine, a powerful, interpretable glassbox model that can be as accurate as many blackbox models. The MIT-licensed source code can be downloaded from github.com/microsoft/interpret.
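The Explainable Boosting Machine belongs to the generalized additive model (GAM) family: the prediction is an intercept plus one learned shape function per feature, so each feature's contribution can be inspected in isolation. A minimal pure-Python sketch of this structure (the shape functions and values below are hypothetical stand-ins for learned ones, not InterpretML's actual API):

```python
# Sketch of a generalized additive model (GAM) score, the model
# family behind the Explainable Boosting Machine. The per-feature
# shape functions here are hypothetical, not learned.

def f_age(age):
    # Hypothetical learned contribution of the "age" feature.
    return 0.03 * (age - 40)

def f_income(income):
    # Hypothetical learned contribution of the "income" feature.
    return 0.00001 * income

INTERCEPT = -0.5

def gam_score(age, income):
    # Features contribute additively, so each term can be plotted
    # and examined on its own -- this is what makes the model
    # "glassbox" rather than "blackbox".
    return INTERCEPT + f_age(age) + f_income(income)
```

Because the terms add up independently, a global explanation is simply a plot of each shape function, and a local explanation for one prediction is the list of its per-feature terms.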
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Regression | California Housing (CH) (test) | MSE | 0.262 | 52 |
| Binary Classification | Higgs (test) | AUC | 69.8 | 30 |
| Classification | MIMIC-III (test) | AUROC | 84 | 13 |
| Regression | Wine Quality (test) | MSE | 0.439 | 11 |
| Regression | Bike Sharing (test) | MSE | 0.124 | 11 |
| Regression | Appliances Energy (test) | MSE | 0.74 | 11 |
| Regression | Song Year (test) | MSE | 0.894 | 11 |