
DICE: Leveraging Sparsification for Out-of-Distribution Detection

About

Detecting out-of-distribution (OOD) inputs is a central challenge for safely deploying machine learning models in the real world. Previous methods commonly rely on an OOD score derived from the overparameterized weight space, while largely overlooking the role of sparsification. In this paper, we reveal the important insight that reliance on unimportant weights and units can directly contribute to the brittleness of OOD detection. To mitigate the issue, we propose a sparsification-based OOD detection framework termed DICE. Our key idea is to rank weights based on a measure of contribution, and selectively use the most salient weights to derive the output for OOD detection. We provide both empirical and theoretical insights, characterizing and explaining the mechanism by which DICE improves OOD detection. By pruning away noisy signals, DICE provably reduces the output variance for OOD data, resulting in a sharper output distribution and stronger separability from ID data. We demonstrate the effectiveness of sparsification-based OOD detection on several benchmarks and establish competitive performance.

Yiyou Sun, Yixuan Li • 2021
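The core idea above, ranking weight contributions and keeping only the most salient ones when computing the output, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the contribution of each final-layer weight is estimated as the weight times the mean in-distribution activation of its input unit, and that a global top-(1−p) threshold selects which weights survive; the function names and the sparsity parameter `p` are hypothetical.

```python
import numpy as np

def dice_mask(W, mean_features, p=0.7):
    """Build a binary mask keeping the top-(1-p) weight contributions.

    W: (num_classes, d) final-layer weight matrix.
    mean_features: (d,) mean penultimate-layer activations, estimated
        on in-distribution data (an assumption of this sketch).
    p: fraction of weights to zero out.
    """
    # Contribution estimate: element-wise product of weights and
    # average activations of the units they connect to.
    V = W * mean_features
    # Keep only contributions above the p-th percentile (global threshold).
    thresh = np.percentile(V, p * 100)
    return V >= thresh

def dice_logits(features, W, b, mask):
    """Compute logits using only the masked (salient) weights."""
    return (W * mask) @ features + b
```

An OOD score (e.g. an energy or max-softmax score) would then be computed from these sparsified logits instead of the dense ones; pruning the low-contribution weights removes noisy terms from the sum, which is the variance-reduction effect the abstract describes.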

Related benchmarks

Task                           Dataset                                           Metric   Result  Rank
Out-of-Distribution Detection  iNaturalist                                       AUROC    94.53   219
Out-of-Distribution Detection  SUN OOD with ImageNet-1k In-distribution (test)   FPR@95   25.45   204
Out-of-Distribution Detection  Textures                                          AUROC    0.9204  168
Out-of-Distribution Detection  Places                                            FPR@95   46.49   142
Out-of-Distribution Detection  ImageNet OOD Average 1k (test)                    FPR@95   27.25   137
Out-of-Distribution Detection  ImageNet-1k ID iNaturalist OOD                    FPR@95   39.69   132
OOD Detection                  CIFAR-10 (IND) SVHN (OOD)                         AUROC    0.9538  131
Out-of-Distribution Detection  CIFAR-10                                          AUROC    90.66   121
OOD Detection                  CIFAR-10 (ID) vs Places 365 (OOD)                 AUROC    82.84   117
Out-of-Distribution Detection  Texture                                           AUROC    91.46   113

Showing 10 of 141 rows.

Other info

Code
