Activation Matters: Test-time Activated Negative Labels for OOD Detection with Vision-Language Models

About

Out-of-distribution (OOD) detection aims to identify samples that deviate from the in-distribution (ID). A popular pipeline introduces negative labels distant from the ID classes and detects OOD samples based on their distance to these labels. However, such labels may be poorly activated on OOD samples and thus fail to capture OOD characteristics. To address this, we propose Test-time Activated Negative Labels (TANL), which dynamically evaluates activation levels across a corpus dataset and mines candidate labels with high activation responses during testing. Specifically, TANL identifies high-confidence test images online and accumulates their assignment probabilities over the corpus to construct a label-activation metric. This metric leverages historical test samples to adaptively align with the test distribution, enabling the selection of distribution-adaptive activated negative labels. By further exploiting the activation information within the current test batch, we introduce a finer-grained, batch-adaptive variant. To fully utilize label-activation knowledge, we propose an activation-aware score function that emphasizes negative labels with stronger activations, boosting performance and improving robustness to the number of negative labels. TANL is training-free, test-efficient, and grounded in theoretical justification. Experiments on diverse backbones and a wide range of task settings validate its effectiveness; notably, on the large-scale ImageNet benchmark, TANL reduces the FPR95 from 17.5% to 9.8%. Code is available at https://github.com/YBZh/OpenOOD-VLM.
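The online procedure in the abstract (accumulate corpus-label assignment probabilities from high-confidence test images, then score with activation-weighted negative labels) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the class name `TANLSketch`, the confidence threshold, the top-k selection, and the softmax weighting of activated labels are all assumptions; the exact update and score rules are in the paper and repository.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class TANLSketch:
    """Hypothetical sketch of test-time activated negative labels.

    id_text : (C, D) embeddings of ID class prompts.
    corpus  : (M, D) embeddings of candidate negative-label prompts.
    """

    def __init__(self, id_text, corpus, conf_thresh=0.9, top_k=100):
        self.id_text = id_text
        self.corpus = corpus
        self.conf_thresh = conf_thresh
        self.top_k = top_k
        # Accumulated activation of each corpus label over past test images.
        self.activation = np.zeros(len(corpus))

    def update(self, img_feats):
        """Online step: accumulate corpus assignment probabilities
        from high-confidence test images only."""
        id_probs = softmax(img_feats @ self.id_text.T)
        confident = id_probs.max(axis=1) >= self.conf_thresh
        if confident.any():
            corpus_probs = softmax(img_feats[confident] @ self.corpus.T)
            self.activation += corpus_probs.sum(axis=0)

    def score(self, img_feats):
        """Activation-aware ID score in (0, 1); higher = more ID-like.
        Negative labels with stronger accumulated activation get larger
        weights (an assumed weighting scheme)."""
        neg_idx = np.argsort(-self.activation)[: self.top_k]
        neg_text = self.corpus[neg_idx]
        w = softmax(self.activation[neg_idx])  # emphasize activated labels
        id_sim = np.exp(img_feats @ self.id_text.T).sum(axis=1)
        neg_sim = (np.exp(img_feats @ neg_text.T) * w).sum(axis=1)
        return id_sim / (id_sim + neg_sim)
```

In a streaming evaluation one would call `update` on each incoming batch and `score` to rank samples; a sample is flagged OOD when its score falls below a threshold chosen on the ID validation set.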

Yabin Zhang, Maya Varma, Yunhe Gao, Jean-Benoit Delbrouck, Jiaming Liu, Chong Wang, Curtis Langlotz • 2026

Related benchmarks

| Task                            | Dataset                                          | Metric            | Result | Rank |
|---------------------------------|--------------------------------------------------|-------------------|--------|------|
| Image Classification            | ImageNet-1K                                      | Accuracy          | 66.82  | 92   |
| OOD Detection                   | iNaturalist (OOD) / ImageNet-1k (ID) 1.0 (test)  | FPR95             | 0.42   | 64   |
| Out-of-Distribution Detection   | ImageNet Far-OOD                                 | AUROC             | 96.43  | 52   |
| OOD Detection                   | ImageNet-1k ID Average OOD                       | AUROC             | 0.9797 | 50   |
| OOD Detection                   | ImageNet-1K                                      | Average FPR95     | 9.81   | 44   |
| Out-of-Distribution Detection   | ImageNet-1k (ID) vs Textures (OOD) 1.0 (test)    | AUC               | 97.11  | 40   |
| OOD Detection                   | ImageNet-1k (ID) vs Sun (OOD) 1.0 (test)         | AUROC             | 99.07  | 31   |
| OOD Detection                   | ImageNet-1k (ID) vs Places (OOD) 1.0 (test)      | AUROC             | 95.87  | 31   |
| OOD Detection                   | OpenOOD CIFAR100 as ID v1.5 (test)               | AUROC (Near-OOD)  | 85.06  | 15   |
| Out-of-Distribution Detection   | ImageNet-1k Near-OOD                             | AUROC             | 84.53  | 9    |

(10 of 15 rows shown)
