
Using the Path of Least Resistance to Explain Deep Networks

About

Integrated Gradients (IG), a widely used axiomatic path-based attribution method, assigns importance scores to input features by integrating model gradients along a straight path from a baseline to the input. While effective in some cases, we show that straight paths can lead to flawed attributions. In this paper, we identify the cause of these misattributions and propose an alternative approach that equips the input space with a model-induced Riemannian metric (derived from the explained model's Jacobian) and computes attributions by integrating gradients along geodesics under this metric. We call this method Geodesic Integrated Gradients (GIG). To approximate geodesic paths, we introduce two techniques: a k-Nearest Neighbours-based approach for smaller models and a Stochastic Variational Inference-based method for larger ones. Additionally, we propose a new axiom, No-Cancellation Completeness (NCC), which strengthens completeness by ruling out feature-wise cancellation. We prove that, for path-based attributions under the model-induced metric, NCC holds if and only if the integration path is a geodesic. Through experiments on both synthetic and real-world image classification data, we provide empirical evidence supporting our theoretical analysis and showing that GIG produces more faithful attributions than existing methods, including IG, on the benchmarks considered.
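To make the path-based framing concrete, here is a minimal sketch (not the paper's GIG implementation) of attribution by integrating gradients along a piecewise-linear path from a baseline to the input. With a two-point path it reduces to standard straight-line IG; with intermediate waypoints it can approximate a curved (e.g. geodesic) path. The toy `model`, the finite-difference `grad`, and all parameter values are illustrative assumptions; the final check illustrates the completeness property the abstract refers to (attributions summing to the change in model output).

```python
import numpy as np

def model(x):
    # Toy differentiable model with a smooth scalar output (illustrative only).
    return np.tanh(x @ np.array([1.5, -2.0]))

def grad(f, x, eps=1e-5):
    # Central finite-difference gradient of a scalar function f at x.
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        d = np.zeros_like(x, dtype=float)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def path_integrated_gradients(f, waypoints, steps_per_segment=100):
    # Integrate gradients along the piecewise-linear path given by
    # `waypoints` (shape [k, d], from baseline to input), midpoint rule.
    # waypoints = [baseline, input] recovers straight-path IG.
    attr = np.zeros_like(waypoints[0], dtype=float)
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        delta = b - a
        for s in range(steps_per_segment):
            t = (s + 0.5) / steps_per_segment
            attr += grad(f, a + t * delta) * delta / steps_per_segment
    return attr

baseline = np.zeros(2)
x = np.array([1.0, 0.5])

# Straight path (standard IG).
ig = path_integrated_gradients(model, np.stack([baseline, x]))

# Completeness holds for any path: attributions sum to f(x) - f(baseline).
print(abs(ig.sum() - (model(x) - model(baseline))) < 1e-4)
```

Note that ordinary completeness is path-independent, which is why it cannot distinguish a straight path from a geodesic; the paper's stronger No-Cancellation Completeness axiom is what singles out geodesics under the model-induced metric.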

Sina Salek, Joseph Enguehard • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Feature Attribution | Synthetic half-moons with Gaussian noise (std dev 0.05-0.65) | AUC-Purity: 0.531 | 10 |
| Feature Attribution | Pascal VOC (test) | AUC-Comp: 0.27 | 8 |
