
Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey

About

Deep Learning is a state-of-the-art technique for inference on extensive or complex data. Because of their multilayer nonlinear structure, Deep Neural Networks are black box models and are often criticized as non-transparent, with predictions that humans cannot trace. Furthermore, the models learn from artificial datasets, which often contain bias or contaminated, discriminating content. As decision-making algorithms become more widespread, they can contribute to promoting prejudice and unfairness, which is hard to notice due to the lack of transparency. Hence, scientists have developed several so-called explanators, or explainers, which try to point out the connection between input and output and thereby represent the inner structure of machine learning black boxes in a simplified way. In this survey we distinguish the mechanisms and properties of explaining systems for Deep Neural Networks applied to Computer Vision tasks. We give a comprehensive overview of the taxonomy of related studies and compare several survey papers that deal with explainability in general. We work out the drawbacks and gaps and summarize ideas for further research.
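The abstract's core idea — an explainer that "points out the connection between input and output" — can be sketched with the simplest such technique, a gradient-based saliency map. The toy model and all names below are illustrative assumptions, not taken from the survey; for a linear model the input gradient is available in closed form, while real explainers obtain it for deep nonlinear networks via backpropagation.

```python
import numpy as np

# Gradient-based saliency, minimal sketch: for a linear "network"
# f(x) = W @ x, the gradient of the class-c score w.r.t. the input is
# just W[c], so the per-pixel relevance is |W[c]|. Deep explainers
# (vanilla gradients, Grad-CAM, etc.) generalize this via backprop.

def saliency_linear(W, x, c):
    """Absolute input gradient of the class-c score of f(x) = W @ x."""
    return np.abs(W[c])  # d(W @ x)[c] / dx = W[c]

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))            # 3 classes, 4x4 "image" flattened
x = rng.normal(size=16)
c = int(np.argmax(W @ x))               # explain the predicted class
heatmap = saliency_linear(W, x, c).reshape(4, 4)  # per-pixel relevance map
```

A high value in `heatmap` marks a pixel whose change most affects the predicted score — the simplified input-output connection the survey's explainers aim to expose.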

Vanessa Buhrmester, David Münch, Michael Arens • 2019

Related benchmarks

Task               Dataset                    Metric                       Result   Rank
Model Calibration  MN                         MAE                          5.413    20
Model Calibration  BN                         MAE                          22.9     20
Calibration        Covertype label 2 (test)   Expected Calibration Error   4.268    10
Model Calibration  BN 0.3                     MAE                          6.593    10
Model Calibration  MN 0.3                     MAE                          6.031    10
Model Calibration  BN 10                      MAE                          54.89    10
Calibration        Covertype label 1 (test)   ECE                          4.207    10
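For reference, the Expected Calibration Error (ECE) metric appearing in the benchmarks can be computed as below. This is a sketch of the standard binned definition; the benchmark's exact binning scheme and bin count are assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: mean |accuracy - confidence| per confidence bin,
    weighted by the fraction of samples in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue  # empty bin contributes nothing
        acc = correct[mask].mean()    # empirical accuracy in the bin
        conf = confidences[mask].mean()  # mean predicted confidence
        ece += mask.mean() * abs(acc - conf)
    return ece
```

A perfectly calibrated model (e.g., 80% accuracy at 0.8 confidence) yields an ECE of 0; the benchmark values above are typically reported as percentages.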
