
Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE)

About

CLIP embeddings have demonstrated remarkable performance across a wide range of multimodal applications. However, these high-dimensional, dense vector representations are not easily interpretable, limiting our understanding of the rich structure of CLIP and its use in downstream applications that require transparency. In this work, we show that the semantic structure of CLIP's latent space can be leveraged to provide interpretability, allowing for the decomposition of representations into semantic concepts. We formulate this problem as one of sparse recovery and propose a novel method, Sparse Linear Concept Embeddings, for transforming CLIP representations into sparse linear combinations of human-interpretable concepts. Distinct from previous work, SpLiCE is task-agnostic and can be used, without training, to explain and even replace traditional dense CLIP representations, maintaining high downstream performance while significantly improving their interpretability. We also demonstrate significant use cases of SpLiCE representations including detecting spurious correlations and model editing.
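The sparse-recovery formulation described above can be sketched as a nonnegative lasso over a concept dictionary: find sparse, nonnegative weights whose linear combination of concept embeddings reconstructs a given CLIP embedding. The snippet below is an illustrative ISTA solver for that problem, not the authors' released implementation; the names `splice_decompose`, `concept_dict`, and the penalty `lam` are placeholders chosen for this sketch.

```python
import numpy as np

def splice_decompose(z, concept_dict, lam=0.05, n_iter=500):
    """Sparse nonnegative decomposition of an embedding over a concept
    dictionary (illustrative sketch via ISTA, not the paper's exact solver).

    z            : (d,) embedding to explain
    concept_dict : (k, d) rows are unit-norm concept embeddings
    lam          : l1 penalty controlling sparsity
    Returns w    : (k,) nonnegative sparse concept weights
    """
    C = concept_dict
    # Step size from the Lipschitz constant of the quadratic term.
    lr = 1.0 / np.linalg.norm(C @ C.T, 2)
    w = np.zeros(C.shape[0])
    for _ in range(n_iter):
        grad = C @ (C.T @ w - z)           # gradient of 0.5*||C^T w - z||^2
        w = np.maximum(w - lr * grad - lr * lam, 0.0)  # soft-threshold, w >= 0
    return w
```

With a reasonably incoherent dictionary, most weights shrink to exactly zero, so an embedding built from a few concepts is explained by a short, human-readable list of active concepts.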

Usha Bhalla, Alex Oesterling, Suraj Srinivas, Flavio P. Calmon, Himabindu Lakkaraju • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 (test) | Top-1 Accuracy | 58.29 | 175 |
| Concept Decomposition | ImageNet 500 random classes (test) | Zero-Shot Accuracy | 48 | 14 |
| Inference | ImageNet-100 | Embedding Latency (ms/img) | 4.5 | 4 |
| Image Classification | CIFAR-100 | Accuracy (Seen Classes) | 24.8 | 4 |
| Image Classification | ImageNet-100 | Seen Accuracy | 37.1 | 4 |
| Image Classification | ImageNet-1K | Seen Score | 27.5 | 4 |
| Image Classification | Places365 | Accuracy (Seen) | 27.6 | 4 |

Other info

Code
