
InterpretML: A Unified Framework for Machine Learning Interpretability

About

InterpretML is an open-source Python package that exposes machine learning interpretability algorithms to practitioners and researchers. It covers two types of interpretability: glassbox models, which are machine learning models designed for interpretability (e.g., linear models, rule lists, generalized additive models), and blackbox explainability techniques for explaining existing systems (e.g., Partial Dependence, LIME). The package lets practitioners easily compare interpretability algorithms by exposing multiple methods under a unified API and by providing a built-in, extensible visualization platform. InterpretML also includes the first implementation of the Explainable Boosting Machine (EBM), a powerful, interpretable glassbox model that can be as accurate as many blackbox models. The MIT-licensed source code can be downloaded from github.com/microsoft/interpret.
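The core idea behind the Explainable Boosting Machine is a generalized additive model trained by cycling through features and boosting a tiny single-feature learner on the residuals, so the final model remains a sum of per-feature shape functions. The sketch below is a toy, stdlib-only illustration of that cyclic one-feature-at-a-time boosting loop — it is not the InterpretML API (in the package itself you would use classes such as `interpret.glassbox.ExplainableBoostingClassifier`), and the stump search and toy data are purely illustrative:

```python
# Toy sketch of the additive-model idea behind EBM-style training:
# cycle through features, fitting one single-feature stump per round
# on the current residuals. Pure stdlib; not the real InterpretML code.

def fit_stump(x, residual):
    """Best single-split regression stump on one feature (brute-force search)."""
    best = (0.0, 0.0, 0.0)  # (threshold, left_value, right_value)
    best_err = sum(r * r for r in residual)
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residual) if xi <= t]
        right = [r for xi, r in zip(x, residual) if xi > t]
        lv = sum(left) / len(left) if left else 0.0
        rv = sum(right) / len(right) if right else 0.0
        err = (sum((r - lv) ** 2 for r in left)
               + sum((r - rv) ** 2 for r in right))
        if err < best_err:
            best_err, best = err, (t, lv, rv)
    return best

def fit_additive(X, y, rounds=50, lr=0.1):
    """Cyclic boosting: one small stump per feature per round."""
    n_features = len(X[0])
    stumps = [[] for _ in range(n_features)]  # per-feature shape functions
    pred = [0.0] * len(y)
    for _ in range(rounds):
        for j in range(n_features):
            residual = [yi - pi for yi, pi in zip(y, pred)]
            xj = [row[j] for row in X]
            t, lv, rv = fit_stump(xj, residual)
            stumps[j].append((t, lr * lv, lr * rv))
            pred = [p + (lr * lv if row[j] <= t else lr * rv)
                    for p, row in zip(pred, X)]
    return stumps

def predict(stumps, row):
    """Sum each feature's shape function -- the model stays additive."""
    return sum(lv if row[j] <= t else rv
               for j, fs in enumerate(stumps) for (t, lv, rv) in fs)

# Toy data where the target is additive in the features: y = 2*x0 + x1.
X = [[0, 0], [0, 1], [1, 0], [1, 1], [2, 0], [2, 1]]
y = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
model = fit_additive(X, y, rounds=200, lr=0.2)
print(round(predict(model, [1, 1]), 2))
```

Because each feature's contribution is an independent shape function, the model can be inspected feature by feature — which is exactly why glassbox models of this family are interpretable, in contrast to post-hoc blackbox explainers.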

Harsha Nori, Samuel Jenkins, Paul Koch, Rich Caruana • 2019

Related benchmarks

Task                   Dataset                         Metric  Result  Rank
Regression             California Housing (CH) (test)  MSE     0.262   52
Binary Classification  Higgs (test)                    AUC     69.8    30
Classification         MIMIC-III (test)                AUROC   84      13
Regression             Wine Quality (test)             MSE     0.439   11
Regression             Bike Sharing (test)             MSE     0.124   11
Regression             Appliances Energy (test)        MSE     0.74    11
Regression             Song Year (test)                MSE     0.894   11
