Labeling Neural Representations with Inverse Recognition
About
Deep Neural Networks (DNNs) demonstrate remarkable capabilities in learning complex hierarchical data representations, but the nature of these representations remains largely unknown. Existing global explainability methods, such as Network Dissection, face limitations such as reliance on segmentation masks, a lack of statistical significance testing, and high computational demands. We propose Inverse Recognition (INVERT), a scalable approach for connecting learned representations with human-understandable concepts by leveraging their capacity to discriminate between these concepts. In contrast to prior work, INVERT is capable of handling diverse types of neurons, has lower computational complexity, and does not rely on the availability of segmentation masks. Moreover, INVERT provides an interpretable metric assessing the alignment between a representation and its corresponding explanation, and delivers a measure of statistical significance. We demonstrate the applicability of INVERT in various scenarios, including the identification of representations affected by spurious correlations and the interpretation of the hierarchical structure of decision-making within models.
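The alignment metric reported in the benchmarks below is an AUC: how well a neuron's activations discriminate images containing a concept from the rest. A minimal numpy sketch of that idea is shown here (with made-up activation data, not the authors' implementation); the rank-sum formulation used is equivalent to the Mann-Whitney U statistic, from which a significance test follows directly.

```python
import numpy as np

# Hypothetical activations of one neuron:
# 200 images containing the concept, 800 images without it.
rng = np.random.default_rng(0)
concept = rng.normal(1.0, 1.0, 200)
other = rng.normal(0.0, 1.0, 800)

def concept_auc(pos, neg):
    """AUC = P(activation on a concept image > activation on a non-concept image).

    Computed via ranks (Mann-Whitney U), so the same statistic also
    supports a test of statistical significance.
    """
    scores = np.concatenate([pos, neg])
    ranks = scores.argsort().argsort() + 1  # 1-based ranks (continuous scores, ties unlikely)
    u = ranks[: len(pos)].sum() - len(pos) * (len(pos) + 1) / 2
    return u / (len(pos) * len(neg))

auc = concept_auc(concept, other)  # 0.5 = chance, 1.0 = perfect discrimination
```

An AUC near 0.5 means the neuron does not separate the concept from the background distribution; values approaching 1.0 indicate strong alignment between the neuron and the candidate explanation.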
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Neuron description | ImageNet | AUC: 96 | 15 |
| Neuron Interpretation | ImageNet, CoSy benchmark, avgpool layer, 1k | AUC: 0.88 | 12 |
| Neuron description | Places365 | AUC: 0.94 | 6 |
| Neuron Explanation | ImageNet, subset of 20,000 images, 2012 (val) | Explanation Accuracy: 86.9 | 6 |
| Concept Discovery | ImageNet | AUC: 10 | 5 |
| Neuron Interpretation | Places365, CoSy benchmark, avgpool layer | AUC: 81 | 4 |
| Concept Discovery | Places365 | AUC: 10 | 2 |
| Neuron Explanation | MS COCO, subset of 24,237 images, 2017 (train) | Explanation Accuracy: 98.77 | 2 |
| Neuron Explanation | MS COCO, subset of 20 categories, 2017 (val) | Explanation Accuracy: 95.24 | 2 |