
Learning Perturbations to Explain Time Series Predictions

About

Explaining predictions based on multivariate time series data carries the additional difficulty of handling not only multiple features, but also time dependencies. It matters not only what happened, but also when, and the same feature can have a very different impact on a prediction depending on this time information. Previous work has used perturbation-based saliency methods to tackle this issue, perturbing an input using a trainable mask to discover which features, at which times, drive the predictions. However, these methods introduce fixed perturbations, inspired by similar methods on static data, even though there is little motivation for this choice on temporal data. In this work, we aim to explain predictions by learning not only the masks, but also the associated perturbations. We empirically show that learning these perturbations significantly improves the quality of the resulting explanations on time series data.
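The idea sketched in the abstract can be illustrated with a toy example. Below is a minimal, hedged sketch (not the paper's code): a saliency mask `m` and a perturbation `p` are learned jointly so that the masked mixture `m * x + (1 - m) * p` preserves the model's prediction while `m` is pushed toward zero. Entries where `m` must stay high are the ones the model relies on. The toy model, the loss weights, and the finite-difference optimizer are all assumptions made for this sketch; the paper parameterizes the perturbation with a neural network and uses more refined objectives.

```python
import numpy as np

rng = np.random.default_rng(0)
T, F = 10, 3                      # time steps, features
x = rng.normal(size=(T, F))
x[5, 0] = 2.0                     # the only entry the toy model reads

def model(inp):
    # Toy predictor: depends on a single (time, feature) pair.
    return inp[5, 0]

target = model(x)

def loss(m, p):
    # Masked mixture: keep x where m ~ 1, replace by learned p where m ~ 0.
    perturbed = m * x + (1.0 - m) * p
    fidelity = abs(model(perturbed) - target)  # preserve the prediction
    sparsity = 1.0 * m.mean()                  # shrink the mask
    neutrality = 0.5 * (p ** 2).mean()         # keep p uninformative
    return fidelity + sparsity + neutrality

def num_grad(f, a, eps=1e-4):
    # Central finite differences, entry by entry (fine for a toy example).
    g = np.zeros_like(a)
    for idx in np.ndindex(a.shape):
        old = a[idx]
        a[idx] = old + eps; up = f()
        a[idx] = old - eps; dn = f()
        a[idx] = old
        g[idx] = (up - dn) / (2 * eps)
    return g

m = np.full((T, F), 0.5)  # trainable mask
p = np.zeros((T, F))      # trainable perturbation (fixed in prior methods)
lr = 0.1
for _ in range(200):
    gm = num_grad(lambda: loss(m, p), m)
    gp = num_grad(lambda: loss(m, p), p)
    m = np.clip(m - lr * gm, 0.0, 1.0)
    p = p - lr * gp
# After training, the mask concentrates on the (time, feature) entry that
# actually drives the prediction, and stays near zero everywhere else.
```

The key contrast with fixed-perturbation methods is that `p` here is a free variable updated alongside the mask; the `neutrality` term is an illustrative regularizer that prevents the degenerate solution where the learned perturbation simply reproduces the informative input values.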

Joseph Enguehard • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Clinical prediction | PhysioNet sepsis benchmark 2019 (test) | CPD 1.16 | 15 |
| XAI Attribution Faithfulness and Sufficiency | MIMIC-III decompensation | CPD 16.66 | 15 |
