
Sparse CLIP: Co-Optimizing Interpretability and Performance in Contrastive Learning

About

Contrastive Language-Image Pre-training (CLIP) has become a cornerstone in vision-language representation learning, powering diverse downstream tasks and serving as the default vision backbone in multimodal large language models (MLLMs). Despite its success, CLIP's dense and opaque latent representations pose significant interpretability challenges. A common assumption is that interpretability and performance are in tension: enforcing sparsity during training degrades accuracy, motivating recent post-hoc approaches such as Sparse Autoencoders (SAEs). However, these post-hoc approaches often suffer from degraded downstream performance and loss of CLIP's inherent multimodal capabilities, with most learned features remaining unimodal. We propose a simple yet effective approach that integrates sparsity directly into CLIP training, yielding representations that are both interpretable and performant. Compared to SAEs, our Sparse CLIP representations preserve strong downstream task performance, achieve superior interpretability, and retain multimodal capabilities. We show that multimodal sparse features enable straightforward semantic concept alignment and reveal the training dynamics by which cross-modal knowledge emerges. Finally, as a proof of concept, we train a vision-language model on sparse CLIP representations that enables interpretable, vision-based steering capabilities. Our findings challenge the conventional wisdom that interpretability requires sacrificing accuracy and demonstrate that interpretability and performance can be co-optimized, offering a promising design principle for future models.
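The abstract does not specify how sparsity is injected into CLIP training. The sketch below is a minimal illustration, not the authors' released code: it assumes a top-k sparsification of the image and text embeddings plus an L1 penalty on top of the standard symmetric InfoNCE objective, with hypothetical hyperparameters (k, l1_weight, temperature).

```python
# Minimal sketch of a sparsity-regularized CLIP-style contrastive objective.
# Assumptions (not from the paper): top-k sparsification of both embeddings
# and an L1 penalty weight added to the usual symmetric InfoNCE loss.

import torch
import torch.nn.functional as F


def sparsify_topk(z: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k largest-magnitude activations per embedding, zero the rest."""
    topk = torch.topk(z.abs(), k, dim=-1)
    mask = torch.zeros_like(z).scatter_(-1, topk.indices, 1.0)
    return z * mask


def sparse_clip_loss(img_emb, txt_emb, k=64, l1_weight=1e-3, temperature=0.07):
    """Symmetric InfoNCE on sparsified, L2-normalized embeddings,
    plus an L1 term that further encourages sparse activations."""
    img_s = F.normalize(sparsify_topk(img_emb, k), dim=-1)
    txt_s = F.normalize(sparsify_topk(txt_emb, k), dim=-1)

    logits = img_s @ txt_s.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)

    contrastive = (F.cross_entropy(logits, targets) +
                   F.cross_entropy(logits.t(), targets)) / 2
    l1_penalty = img_s.abs().mean() + txt_s.abs().mean()
    return contrastive + l1_weight * l1_penalty


if __name__ == "__main__":
    B, D = 8, 512                                        # batch size, embedding dim
    img = torch.randn(B, D, requires_grad=True)
    txt = torch.randn(B, D, requires_grad=True)
    loss = sparse_clip_loss(img, txt)
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```

In a sketch like this, the contrastive term preserves cross-modal alignment while the top-k mask and L1 term push each representation toward a small set of active features; the actual mechanism used by Sparse CLIP may differ.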

Chuan Qin, Constantin Venhoff, Sonia Joseph, Fanyi Xiao, Stefan Scherer • 2026

Related benchmarks

Task                                Dataset                                                                    Result                         Rank
Image Retrieval                     COCO (val)                                                                 Recall@5: 68.5                 28
Fine-grained Image Classification   Fine-Grained Classification Suite (zero-shot)                              Avg Accuracy: 81               5
Image Classification                ImageNet and Robustness Variants (zero-shot: v1, v2, A, R, S, ObjectNet)   Average Class Accuracy: 75.6   5
Bounding-box Classification         COCO (val)                                                                 Top-1 Acc: 56                  4
Text Retrieval                      COCO (val)                                                                 TR@1: 59.9                     4
