Network Dissection: Quantifying Interpretability of Deep Visual Representations
About
We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a broad data set of visual concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are given labels across a range of objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that the interpretability of units is equivalent to that of random linear combinations of units; we then apply our method to compare the latent representations of various networks trained to solve different supervised and self-supervised tasks. We further analyze the effect of training iterations, compare networks trained with different initializations, examine the impact of network depth and width, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.
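At its core, the scoring step measures how well a unit's thresholded activation map overlaps each concept's ground-truth segmentation, using intersection over union (IoU). Below is a minimal Python sketch of that idea; the function name, array shapes, and the `concept_masks` input are illustrative assumptions rather than the authors' reference implementation, though the top-0.5% activation quantile and the 0.04 IoU cutoff follow the values reported in the paper.

```python
import numpy as np

def dissect_unit(activations, concept_masks, quantile=0.995, iou_threshold=0.04):
    """Score one hidden unit against a set of concept segmentation masks.

    A minimal sketch of the Network Dissection scoring idea.
    Assumed (hypothetical) shapes:
      activations:   (N, H, W) float array, the unit's activation maps
                     over N probe images, upsampled to mask resolution.
      concept_masks: dict mapping concept name -> (N, H, W) boolean array
                     of ground-truth segmentations for that concept.
    """
    # Pick the per-unit threshold T_k so that activations exceed it on
    # roughly the top 0.5% of all pixels across the probe dataset.
    threshold = np.quantile(activations, quantile)
    unit_mask = activations > threshold  # binarized activation map S_k

    best_concept, best_iou = None, 0.0
    for concept, mask in concept_masks.items():
        # IoU(k, c) = |S_k AND L_c| / |S_k OR L_c|, pooled over all images.
        intersection = np.logical_and(unit_mask, mask).sum()
        union = np.logical_or(unit_mask, mask).sum()
        iou = intersection / union if union > 0 else 0.0
        if iou > best_iou:
            best_concept, best_iou = concept, iou

    # The unit is reported as a detector for its single best-matching
    # concept only if the alignment clears the IoU cutoff.
    is_detector = best_iou >= iou_threshold
    return best_concept, best_iou, is_detector
```

A layer's interpretability is then summarized by counting its units that qualify as detectors, which is what makes the per-unit (axis-aligned) versus random-linear-combination comparison in the abstract meaningful.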
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Interpretability Evaluation | MS-COCO | -- | -- | 40 |
| Compositional Explanation | ImageNet | IoU | 0.04 | 24 |
| Neuron Explanation Analysis | ADE20K | IoU | 5 | 24 |
| Neuron Explanation Quality Evaluation | Pascal | IoU | 5 | 8 |
| Neuron Explanation | ImageNet 2012 (val), subset of 20,000 images | Explanation Accuracy | 19.7 | 6 |
| Interpretability Evaluation | ImageNet | Top-1 Precision | 24 | 4 |
| Neuron Explanation | MS COCO 2017 (val), subset of 20 categories | Explanation Accuracy | 95.24 | 2 |
| Interpretability Evaluation | Places365 | Top-1 Precision | 70 | 2 |
| Neuron Explanation | MS COCO 2017 (train), subset of 24,237 images | Explanation Accuracy | 95.06 | 2 |