
Delving into Out-of-Distribution Detection with Vision-Language Representations

About

Recognizing out-of-distribution (OOD) samples is critical for machine learning systems deployed in the open world. The vast majority of OOD detection methods are driven by a single modality (e.g., either vision or language), leaving the rich information in multi-modal representations untapped. Inspired by the recent success of vision-language pre-training, this paper enriches the landscape of OOD detection from a single-modal to a multi-modal regime. Particularly, we propose Maximum Concept Matching (MCM), a simple yet effective zero-shot OOD detection method based on aligning visual features with textual concepts. We contribute in-depth analysis and theoretical insights to understand the effectiveness of MCM. Extensive experiments demonstrate that MCM achieves superior performance on a wide variety of real-world tasks. MCM with vision-language features outperforms a common baseline with pure visual features on a hard OOD task with semantically similar classes by 13.1% (AUROC). Code is available at https://github.com/deeplearning-wisc/MCM.
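The abstract describes MCM as a zero-shot score obtained by matching an image embedding against textual concept embeddings. A minimal sketch of that idea, assuming precomputed CLIP-style embeddings (the function name, array shapes, and temperature value here are illustrative, not taken from the paper's code):

```python
import numpy as np

def mcm_score(image_emb, concept_embs, temperature=1.0):
    """Maximum Concept Matching (MCM) score: max softmax over
    temperature-scaled cosine similarities between an image embedding
    and one text embedding per in-distribution class prompt
    (e.g. "a photo of a <class>"). Higher score -> more likely ID.
    image_emb: (d,); concept_embs: (K, d)."""
    # Cosine similarity = dot product of L2-normalized vectors.
    img = image_emb / np.linalg.norm(image_emb)
    txt = concept_embs / np.linalg.norm(concept_embs, axis=1, keepdims=True)
    sims = txt @ img                       # (K,) similarities
    logits = sims / temperature            # temperature-scaled
    logits -= logits.max()                 # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs.max()
```

At test time, a sample would be flagged OOD when its score falls below a threshold chosen on in-distribution data (e.g. at 95% true positive rate).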

Yifei Ming, Ziyang Cai, Jiuxiang Gu, Yiyou Sun, Wei Li, Yixuan Li • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | ImageNet-1k (val) | -- | 1453 |
| Image Classification | ImageNet-1K | -- | 524 |
| Out-of-Distribution Detection | iNaturalist | FPR@95: 32.08 | 200 |
| Out-of-Distribution Detection | SUN OOD with ImageNet-1k In-distribution (test) | FPR@95: 25.05 | 159 |
| Out-of-Distribution Detection | Textures | AUROC: 0.8596 | 141 |
| Out-of-Distribution Detection | Places | FPR@95: 44.88 | 110 |
| Out-of-Distribution Detection | Places with ImageNet-1k OOD In-distribution (test) | FPR@95: 35.42 | 99 |
| Out-of-Distribution Detection | ImageNet-1k ID, iNaturalist OOD | FPR@95: 31.95 | 87 |
| Image Classification | ImageNet-100 | -- | 84 |
| Out-of-Distribution Detection | Places (OOD) | AUROC: 90.09 | 76 |

Showing 10 of 97 rows.
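The benchmark results above are reported in FPR@95 (false positive rate on OOD samples at 95% true positive rate on ID samples) and AUROC. For reference, a minimal numpy sketch of how these threshold-free OOD metrics are typically computed (function names are illustrative):

```python
import numpy as np

def auroc(scores_id, scores_ood):
    """Area under the ROC curve, treating ID as the positive class.
    Equals the probability that a random ID sample scores above a
    random OOD sample (ties count half)."""
    scores_id = np.asarray(scores_id, dtype=float)
    scores_ood = np.asarray(scores_ood, dtype=float)
    greater = (scores_id[:, None] > scores_ood[None, :]).mean()
    ties = (scores_id[:, None] == scores_ood[None, :]).mean()
    return greater + 0.5 * ties

def fpr_at_95_tpr(scores_id, scores_ood):
    """Fraction of OOD samples scoring above the threshold that
    retains 95% of ID samples (lower is better)."""
    threshold = np.percentile(np.asarray(scores_id, dtype=float), 5)
    return float(np.mean(np.asarray(scores_ood, dtype=float) >= threshold))
```

With perfectly separated scores, AUROC is 1.0 and FPR@95 is 0.0; overlapping score distributions push AUROC toward 0.5 and FPR@95 upward.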
