
MeLIME: Meaningful Local Explanation for Machine Learning Models

About

Most state-of-the-art machine learning algorithms induce black-box models, preventing their application in many sensitive domains. Hence, many methodologies for explaining machine learning models have been proposed to address this problem. In this work, we introduce strategies to improve local explanations by taking into account the distribution of the data used to train the black-box models. We show that our approach, MeLIME, produces more meaningful explanations than other techniques across different ML models operating on various types of data. MeLIME generalizes the LIME method, allowing more flexible perturbation sampling and the use of different local interpretable models. Additionally, we introduce modifications to standard training algorithms of local interpretable models that foster more robust explanations and even allow the production of counterfactual examples. To show the strengths of the proposed approach, we include experiments on tabular data, images, and text, all showing improved explanations. In particular, MeLIME generated more meaningful explanations on the MNIST dataset than methods such as GuidedBackprop, SmoothGrad, and Layer-wise Relevance Propagation. MeLIME is available at https://github.com/tiagobotari/melime.
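To make the core idea concrete, the following is a minimal sketch of the LIME-style local surrogate that MeLIME generalizes: perturb the instance of interest, query the black box, weight the samples by proximity, and fit a weighted linear model whose coefficients act as feature attributions. This is an illustrative simplification, not MeLIME's actual implementation — in particular, the Gaussian perturbation here stands in for MeLIME's more flexible, data-distribution-aware sampling, and all function names are hypothetical.

```python
import numpy as np

def local_linear_explanation(predict_fn, x, n_samples=500, scale=0.5, seed=0):
    """Fit a local linear surrogate around instance x.

    predict_fn: the black-box model, mapping an (n, d) array to n outputs.
    Returns per-feature attributions (the surrogate's coefficients).
    Note: plain Gaussian sampling is a stand-in for MeLIME's sampling
    from the training-data distribution.
    """
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise.
    X = x + rng.normal(scale=scale, size=(n_samples, x.size))
    y = predict_fn(X)
    # Weight samples by proximity to x (RBF kernel).
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / (2 * scale ** 2))
    # Weighted least squares with an intercept column.
    Xb = np.hstack([X, np.ones((n_samples, 1))])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # drop the intercept; keep feature attributions

# Toy black box: f(x) = 3*x0 - 2*x1. The surrogate should recover
# attributions close to (3, -2) for any instance.
f = lambda X: 3 * X[:, 0] - 2 * X[:, 1]
attrib = local_linear_explanation(f, np.array([1.0, 2.0]))
```

Because the toy model is exactly linear, the weighted fit recovers its coefficients; for a genuinely nonlinear black box the attributions instead approximate the local behavior around `x`, which is where the choice of sampling distribution — MeLIME's main contribution — matters.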

Tiago Botari, Frederik Hvilshøj, Rafael Izbicki, Andre C. P. L. F. de Carvalho • 2020

Related benchmarks

Task                 Dataset          Metric  Result  Rank
Feature Attribution  Iris             INFD    0.008   17
Feature Attribution  FMNIST           INFD    0.001   13
Feature Attribution  Rotten Tomatoes  INFD    0.029   13
Feature Attribution  CIFAR10          INFD    0.1     13
