Waffling around for Performance: Visual Classification with Random Words and Broad Concepts
About
The visual classification performance of vision-language models such as CLIP has been shown to benefit from additional semantic knowledge from large language models (LLMs) such as GPT-3. In particular, averaging over LLM-generated class descriptors, e.g. "waffle, which has a round shape", can notably improve generalization performance. In this work, we critically study this behavior and propose WaffleCLIP, a framework for zero-shot visual classification that simply replaces LLM-generated descriptors with random character and word descriptors. Without querying external models, we achieve comparable performance gains on a large number of visual classification tasks. This allows WaffleCLIP to serve both as a low-cost alternative and as a sanity check for any future LLM-based vision-language model extensions. We conduct an extensive experimental study on the impact and shortcomings of the additional semantics introduced by LLM-generated descriptors, and showcase how, if available, semantic context is better leveraged by querying LLMs for high-level concepts, which can also jointly resolve potential class-name ambiguities. Code is available here: https://github.com/ExplainableML/WaffleCLIP.
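The core idea above, replacing LLM-generated class descriptors with random ones and averaging over the resulting prompts, can be sketched as follows. This is a minimal illustration, not the repository's implementation: the function names and the exact prompt template are assumptions (the actual method also mixes in random words from a vocabulary, and the averaging happens over CLIP text embeddings of these prompts).

```python
import random
import string


def make_random_descriptors(num_descriptors=15, num_chars=5, seed=0):
    """Generate random character-sequence descriptors (hypothetical helper).

    WaffleCLIP's insight is that such semantically meaningless descriptors
    can recover much of the ensembling gain attributed to LLM semantics.
    """
    rng = random.Random(seed)
    return [
        "".join(rng.choices(string.ascii_lowercase, k=num_chars))
        for _ in range(num_descriptors)
    ]


def build_prompts(class_name, descriptors):
    """Build one prompt per descriptor; in the full pipeline, the CLIP text
    embeddings of these prompts are averaged into one class embedding."""
    return [f"a photo of a {class_name}, which has {d}." for d in descriptors]


descriptors = make_random_descriptors(num_descriptors=3)
prompts = build_prompts("waffle", descriptors)
for p in prompts:
    print(p)
```

Classification then proceeds as in standard zero-shot CLIP: each image embedding is compared against the averaged prompt embedding of every class, and the highest-similarity class wins.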
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | ImageNet-1K | Top-1 Acc: 68.81 | 524 |
| Image Classification | EuroSAT | -- | 497 |
| Image Classification | Food-101 | -- | 494 |
| Image Classification | DTD | Accuracy: 40.05 | 487 |
| Image Classification | ImageNet | -- | 429 |
| Image Classification | SUN397 | -- | 425 |
| Image Classification | UCF101 | Top-1 Acc: 67.19 | 404 |
| Image Classification | ImageNet 1k (test) | Top-1 Accuracy: 76.1 | 359 |
| Image Classification | ImageNet (test) | Top-1 Accuracy: 75.31 | 291 |
| Image Classification | StanfordCars | Accuracy: 63.57 | 266 |