
CODEs: Chamfer Out-of-Distribution Examples against Overconfidence Issue

About

Overconfident predictions on out-of-distribution (OOD) samples are a thorny issue for deep neural networks. The key to resolving the OOD overconfidence issue at its root is to build a subset of OOD samples and then suppress predictions on them. This paper proposes Chamfer OOD examples (CODEs), whose distribution is close to that of in-distribution samples and which can therefore be used to effectively alleviate the OOD overconfidence issue by suppressing predictions on them. To obtain CODEs, we first generate seed OOD examples via slicing-and-splicing operations on in-distribution samples from different categories, and then feed them to a Chamfer generative adversarial network for distribution transformation, without access to any extra data. Training while suppressing predictions on CODEs is validated to largely alleviate the OOD overconfidence issue without hurting classification accuracy, and to outperform state-of-the-art methods. In addition, we demonstrate that CODEs are useful for improving OOD detection and classification.
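The two-stage recipe in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes a simple half-and-half splice for the seed OOD examples (the paper's exact slicing scheme may differ, and the Chamfer GAN stage is omitted entirely), and it models "suppressing predictions" as cross-entropy against a uniform target, a common choice for this kind of suppression loss.

```python
import numpy as np

def slice_and_splice(img_a, img_b, axis=0):
    """Build a seed OOD example by splicing halves of two
    in-distribution images from different categories.
    Hypothetical helper: a simple half-and-half splice stands in
    for the paper's slicing-and-splicing operations."""
    h = img_a.shape[axis] // 2
    top = np.take(img_a, range(0, h), axis=axis)
    bottom = np.take(img_b, range(h, img_b.shape[axis]), axis=axis)
    return np.concatenate([top, bottom], axis=axis)

def suppression_loss(logits):
    """Cross-entropy between the softmax output and a uniform
    distribution over the k classes; it is minimized when the
    network is maximally unconfident on the (OOD) input."""
    # numerically stable log-softmax
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # uniform target assigns weight 1/k to every class
    return -log_probs.mean(axis=-1)
```

During training, this loss on (transformed) OOD examples would be added to the usual classification loss on in-distribution samples; a perfectly uniform prediction over k classes attains the minimum value log(k).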

Keke Tang, Dingruibo Miao, Weilong Peng, Jianpeng Wu, Yawen Shi, Zhaoquan Gu, Zhihong Tian, Wenping Wang · 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Out-of-Distribution Detection | iNaturalist | FPR@95 | 56.38 | 200 |
| Out-of-Distribution Detection | Textures | -- | -- | 141 |
| Out-of-Distribution Detection | Places | FPR@95 | 68.1 | 110 |
| Out-of-Distribution Detection | SUN | FPR@95 | 70.23 | 71 |
| Out-of-Distribution Detection | MNIST | -- | -- | 13 |
| Out-of-Distribution Detection | FMNIST | -- | -- | 13 |
| Out-of-Distribution Detection | CIFAR-10 | SVHN OOD Score | 72.23 | 9 |
| Out-of-Distribution Detection | CIFAR-100-C | MMC | 74.96 | 9 |
| Out-of-Distribution Detection | CIFAR-100 | SVHN Score | 82.08 | 9 |
| Confidence calibration | MNIST ID (test) | ECE | 0.31 | 9 |

Showing 10 of 17 rows.
