
Contrastive Out-of-Distribution Detection for Pretrained Transformers

About

Pretrained Transformers achieve remarkable performance when training and test data are from the same distribution. However, in real-world scenarios, the model often faces out-of-distribution (OOD) instances that can cause severe semantic shift problems at inference time. Therefore, in practice, a reliable model should identify such instances, and then either reject them during inference or pass them over to models that handle another distribution. In this paper, we develop an unsupervised OOD detection method, in which only the in-distribution (ID) data are used in training. We propose to fine-tune the Transformers with a contrastive loss, which improves the compactness of representations, such that OOD instances can be better differentiated from ID ones. These OOD instances can then be accurately detected using the Mahalanobis distance in the model's penultimate layer. We experiment with comprehensive settings and achieve near-perfect OOD detection performance, outperforming baselines drastically. We further investigate the rationales behind the improvement, finding that more compact representations through margin-based contrastive learning bring the improvement. We release our code to the community for future research.
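The detection step described above scores each test instance by its Mahalanobis distance to the in-distribution training data in the model's penultimate-layer feature space. A minimal sketch of that scoring, assuming features have already been extracted from a fine-tuned encoder (the function names and synthetic data here are illustrative, not from the paper's released code):

```python
import numpy as np

def fit_gaussian(features, labels):
    """Estimate per-class means and a shared (tied) covariance
    from in-distribution penultimate-layer features."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate(
        [features[labels == c] - means[c] for c in classes]
    )
    cov = centered.T @ centered / len(features)
    # Pseudo-inverse guards against a singular covariance estimate.
    precision = np.linalg.pinv(cov)
    return means, precision

def mahalanobis_score(x, means, precision):
    """OOD score for one feature vector: the minimum squared
    Mahalanobis distance to any class mean (higher = more OOD)."""
    return min(
        float((x - mu) @ precision @ (x - mu)) for mu in means.values()
    )
```

With two well-separated synthetic ID clusters, a point far from both class means receives a much larger score than a training point, which is the signal thresholded for rejection.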

Wenxuan Zhou, Fangyu Liu, Muhao Chen • 2021

Related benchmarks

Task                              Dataset             Result           Rank
Machine-generated text detection  Grover (test)       Accuracy 72.15   36
Out-of-Distribution Detection     CLINC Full          FPR 11.24        34
OOD Detection                     Yelp (test)         AUROC 97.1       34
Out-of-Distribution Detection     CLINC Small         FPR 0.1391       34
Out-of-Distribution Detection     CLINC Full (test)   AUROC 97.18      21
OOD Detection                     SST2 (test)         AUROC 0.9016     17
Out-of-Distribution Detection     News                FPR 69.31        17
Out-of-Distribution Detection     CLINC Small (test)  AUROC 96.82      17
OOD Detection                     NEWSTOP5 (test)     AUROC 80.19      17
Out-of-Distribution Detection     Yelp                FPR 16.97        17

(Showing 10 of 27 rows.)
