
Learning Encoding-Decoding Direction Pairs to Unveil Concepts of Influence in Deep Vision Networks

About

Empirical evidence shows that deep vision networks often represent concepts as directions in latent space with concept information written along directional components in the vector representation of the input. However, the mechanism to encode (write) and decode (read) concept information to and from vector representations is not directly accessible as it constitutes a latent mechanism that naturally emerges from the training process of the network. Recovering this mechanism unlocks significant potential to open the black-box nature of deep networks, enabling understanding, debugging, and improving deep learning models. In this work, we propose an unsupervised method to recover this mechanism. For each concept, we explain that under the hypothesis of linear concept representations, this mechanism can be implemented with the help of two directions: the first facilitating encoding of concept information and the second facilitating decoding. Unlike prior matrix decomposition, autoencoder, or dictionary learning methods that rely on feature reconstruction, we propose a new perspective: decoding directions are identified via directional clustering of activations, and encoding directions are estimated with signal vectors under a probabilistic view. We further leverage network weights through a novel technique, Uncertainty Region Alignment, which reveals interpretable directions affecting predictions. Our analysis shows that (a) on synthetic data, our method recovers ground-truth direction pairs; (b) on real data, decoding directions map to monosemantic, interpretable concepts and outperform unsupervised baselines; and (c) signal vectors faithfully estimate encoding directions, validated via activation maximization. Finally, we demonstrate applications in understanding global model behavior, explaining individual predictions, and intervening to produce counterfactuals or correct errors.
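The abstract's two core estimators can be sketched in a few lines. Below is a minimal, illustrative Python sketch (not the paper's exact algorithm): decoding directions are approximated by directional (spherical k-means) clustering of unit-normalized activations, and encoding directions are approximated by a Haufe-style "signal vector" transform that maps each decoding direction through the activation covariance. All function names and defaults here are assumptions for illustration.

```python
# Illustrative sketch only: directional clustering for decoding directions and
# covariance-based signal vectors for encoding directions. Names are hypothetical.
import numpy as np

def directional_kmeans(acts, k, iters=50, seed=0):
    """Cluster unit-normalized activations by cosine similarity (spherical k-means).
    Returned centroids serve as candidate decoding directions."""
    rng = np.random.default_rng(seed)
    X = acts / np.linalg.norm(acts, axis=1, keepdims=True)
    C = X[rng.choice(len(X), k, replace=False)]        # init from data points
    for _ in range(iters):
        labels = (X @ C.T).argmax(axis=1)              # assign to most-aligned centroid
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = members.sum(axis=0)
                C[j] = m / np.linalg.norm(m)           # renormalized mean direction
    return C, labels

def signal_vectors(acts, decoding_dirs):
    """Probabilistic 'signal' estimate of encoding directions:
    a_j proportional to Cov(acts) @ w_j (filter-to-pattern transform)."""
    cov = np.cov(acts, rowvar=False)
    A = decoding_dirs @ cov
    return A / np.linalg.norm(A, axis=1, keepdims=True)
```

The key design point mirrored here is the paper's separation of reading and writing: a decoding direction is a good linear read-out of a concept, while the corresponding encoding direction (the signal vector) describes how that concept is actually written into activations, and the two generally differ whenever activation dimensions are correlated.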

Alexandros Doumanoglou, Kurt Driessens, Dimitrios Zarpalas • 2025

Related benchmarks

Task                              | Dataset                  | Result                       | Rank
Clustering Quality                | ImageNet                 | Coverage: 85                 | 12
Concept Interpretability          | ImageNet                 | Precision: 63                | 12
Influence Analysis                | ImageNet                 | I1: 91                       | 12
Interpretability Evaluation       | ImageNet (Inception-v3)  | Coverage: 93                 | 12
Interpretable Direction Discovery | Places365                | Coverage: 86                 | 12
Latent Direction Analysis         | Moments in Time (MiT)    | Coverage: 88                 | 12
Network Dissection                | Broden                   | Concept Detectors (Color): 2 | 12
Semantic Segmentation             | ImageNet                 | S1 Score: 41.2               | 12
Monosemanticity Evaluation        | ImageNet                 | M Metric: 7.09               | 12
Concept Discovery                 | ImageNet                 | Coverage: 85                 | 10
