
Hidden Monotonicity: Explaining Deep Neural Networks via their DC Decomposition

About

It has been demonstrated in various contexts that monotonicity leads to better explainability in neural networks. However, not every function can be well approximated by a monotone neural network. We demonstrate that monotonicity can still be used in two ways to boost explainability. First, we use an adaptation of the decomposition of a trained ReLU network into two monotone and convex parts, thereby overcoming numerical obstacles from an inherent blowup of the weights in this procedure. Our proposed saliency methods, SplitCAM and SplitLRP, improve on state-of-the-art results on both VGG16 and ResNet18 networks on ImageNet-S across all Quantus saliency metric categories. Second, we show that training a model as the difference between two monotone neural networks results in a system with strong self-explainability properties.
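To illustrate the difference-of-convex idea behind such a decomposition, the sketch below splits a single linear layer's weight matrix into its positive and negative parts, writing the layer as a difference of two maps with nonnegative weights; on nonnegative inputs (such as post-ReLU activations), each part is monotone and convex. This is a minimal, hypothetical sketch of the textbook split, not the paper's adapted decomposition (which is designed to control the weight blowup mentioned above); the function name is ours.

```python
import numpy as np

def split_linear(W):
    """Split a weight matrix W into nonnegative parts so that
    W @ x = W_plus @ x - W_minus @ x for every input x.

    With nonnegative inputs (e.g. post-ReLU activations), each part is
    a monotone, convex map. Illustrative sketch only; not the adapted
    decomposition proposed in the paper.
    """
    W_plus = np.maximum(W, 0.0)    # keep positive entries, zero out the rest
    W_minus = np.maximum(-W, 0.0)  # magnitudes of the negative entries
    return W_plus, W_minus

# Toy check on a random layer with a nonnegative input vector.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
x = np.abs(rng.normal(size=3))     # nonnegative, as after a ReLU

W_plus, W_minus = split_linear(W)
assert np.allclose(W @ x, W_plus @ x - W_minus @ x)
```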

Jakob Paul Zimmermann, Georg Loho · 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
XAI Evaluation | ImageNet-S (val) | Selection Score | 5.633 | 28
XAI Attribution | ImageNet-S (val) | Attribution Localization | 0.651 | 19
Explainable AI Evaluation | ImageNet-S | Selection | 6.785 | 17
