Better Language Models Exhibit Higher Visual Alignment
About
How well do text-only large language models (LLMs) align with the visual world? We present a systematic evaluation of this question by incorporating frozen representations of various language models into a discriminative vision-language framework and measuring zero-shot generalization to novel concepts. We find that decoder-based models exhibit stronger visual alignment than encoders, even when controlling for model and dataset size. Moreover, language modeling performance correlates with visual generalization, suggesting that advances in unimodal LLMs can simultaneously improve vision models. Leveraging these insights, we propose ShareLock, a lightweight method for fusing frozen vision and language backbones. ShareLock achieves robust performance across tasks while drastically reducing the need for paired data and compute. With just 563k image-caption pairs and under one GPU-hour of training, it reaches 51% accuracy on ImageNet. In cross-lingual settings, ShareLock dramatically outperforms CLIP, achieving 38.7% top-1 accuracy on Chinese image classification versus CLIP's 1.4%. Code is available.
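The zero-shot protocol described above follows the standard CLIP-style recipe: embed images and class-name texts with the (frozen) backbones, then assign each image to the class with the highest cosine similarity. A minimal sketch of that classification step, with toy random features standing in for real backbone outputs (the function name `zero_shot_classify` and all data here are illustrative, not from the paper's code):

```python
import numpy as np

def zero_shot_classify(image_feats, text_feats):
    """CLIP-style zero-shot classification: pick, for each image,
    the class whose text embedding has the highest cosine similarity."""
    # L2-normalize both sides so dot products equal cosine similarities
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    sims = img @ txt.T          # shape: (n_images, n_classes)
    return sims.argmax(axis=1)  # predicted class index per image

# Toy example: 3 hypothetical class embeddings in a 4-d feature space,
# with each "image" feature lying near one class embedding.
rng = np.random.default_rng(0)
classes = rng.normal(size=(3, 4))
images = classes[[2, 0, 1]] + 0.01 * rng.normal(size=(3, 4))
print(zero_shot_classify(images, classes))  # → [2 0 1]
```

In ShareLock's setting, only a small projection head on top of the frozen language features would be trained; the similarity-based classification itself stays exactly as above.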
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet-1K | Top-1 Accuracy | 59.1 | 836 |
| Text-to-Image Retrieval | Flickr30K | R@1 | 38.5 | 460 |
| Image-to-Text Retrieval | Flickr30K | R@1 | 54.8 | 379 |
| Image-to-Text Retrieval | MSCOCO | R@1 | 30 | 124 |
| Text-to-Image Retrieval | MSCOCO | R@1 | 16.5 | 118 |
| Compositional Reasoning | Winoground | Text-to-Image Score | 26.3 | 21 |
| Image Classification | CLIP Zero-shot Evaluation Suite (10 datasets) | Cars Accuracy | 13.2 | 16 |