Delving into Out-of-Distribution Detection with Vision-Language Representations
About
Recognizing out-of-distribution (OOD) samples is critical for machine learning systems deployed in the open world. The vast majority of OOD detection methods are driven by a single modality (e.g., either vision or language), leaving the rich information in multi-modal representations untapped. Inspired by the recent success of vision-language pre-training, this paper enriches the landscape of OOD detection from a single-modal to a multi-modal regime. Particularly, we propose Maximum Concept Matching (MCM), a simple yet effective zero-shot OOD detection method based on aligning visual features with textual concepts. We contribute in-depth analysis and theoretical insights to understand the effectiveness of MCM. Extensive experiments demonstrate that MCM achieves superior performance on a wide variety of real-world tasks. MCM with vision-language features outperforms a common baseline with pure visual features on a hard OOD task with semantically similar classes by 13.1% (AUROC). Code is available at https://github.com/deeplearning-wisc/MCM.
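The idea behind MCM, as described above, is to score a test image by how strongly its visual embedding aligns with the textual embeddings of the in-distribution class names, taking the maximum temperature-scaled softmax similarity as the ID confidence. A minimal sketch of that scoring rule is below; the embeddings here are random stand-ins (in real use they would come from a vision-language model such as CLIP), and the temperature value is an arbitrary placeholder, not the paper's tuned setting.

```python
import torch
import torch.nn.functional as F

def mcm_score(image_feat, concept_feats, tau=1.0):
    """Maximum Concept Matching (MCM) score.

    image_feat:    (d,) L2-normalized image embedding
    concept_feats: (K, d) L2-normalized text embeddings, one per ID class name
    tau:           softmax temperature (placeholder; the paper tunes this)

    Returns the maximum softmax over cosine similarities between the image
    and the concept embeddings; low values flag a sample as OOD.
    """
    sims = concept_feats @ image_feat        # (K,) cosine similarities
    probs = F.softmax(sims / tau, dim=0)     # temperature-scaled softmax
    return probs.max().item()

# Toy demo with random stand-in embeddings (hypothetical, for illustration only).
torch.manual_seed(0)
d, K = 512, 5
concepts = F.normalize(torch.randn(K, d), dim=-1)
# An "ID-like" image sits close to one concept; an "OOD-like" image is unrelated.
id_like = F.normalize(concepts[2] + 0.1 * torch.randn(d), dim=-1)
ood_like = F.normalize(torch.randn(d), dim=-1)

print(mcm_score(id_like, concepts, tau=0.1) > mcm_score(ood_like, concepts, tau=0.1))
```

Because MCM needs only class names and a pre-trained vision-language encoder, the same scoring rule applies zero-shot: no OOD data and no task-specific training are required.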
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | ImageNet-1k (val) | -- | 1453 |
| Image Classification | ImageNet-1K | -- | 524 |
| Out-of-Distribution Detection | iNaturalist | FPR95: 32.08 | 200 |
| Out-of-Distribution Detection | SUN OOD with ImageNet-1k In-distribution (test) | FPR95: 25.05 | 159 |
| Out-of-Distribution Detection | Textures | AUROC: 0.8596 | 141 |
| Out-of-Distribution Detection | Places | FPR95: 44.88 | 110 |
| Out-of-Distribution Detection | Places with ImageNet-1k In-distribution (test) | FPR95: 35.42 | 99 |
| Out-of-Distribution Detection | ImageNet-1k ID, iNaturalist OOD | FPR95: 31.95 | 87 |
| Image Classification | ImageNet-100 | -- | 84 |
| Out-of-Distribution Detection | Places (OOD) | AUROC: 90.09 | 76 |