
Calibrating Transformers via Sparse Gaussian Processes

About

Transformer models have achieved profound success in prediction tasks across a wide range of applications in natural language processing, speech recognition, and computer vision. Extending this success to safety-critical domains requires calibrated uncertainty estimation, which remains under-explored for Transformers. To address this, we propose Sparse Gaussian Process attention (SGPA), which performs Bayesian inference directly in the output space of a Transformer's multi-head attention blocks (MHAs) to calibrate its uncertainty. SGPA replaces the scaled dot-product operation with a valid symmetric kernel and uses sparse Gaussian process (SGP) techniques to approximate the posterior processes of the MHA outputs. Empirically, on a suite of prediction tasks on text, images, and graphs, SGPA-based Transformers achieve competitive predictive accuracy while noticeably improving both in-distribution calibration and out-of-distribution robustness and detection.

Wenlong Chen, Yingzhen Li • 2023
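
To make the mechanism concrete, below is a minimal NumPy sketch of a single SGPA-style attention head. It uses the textbook sparse-GP predictive equations, with the keys acting as inducing inputs and the values standing in for the inducing outputs, an exponentiated scaled dot-product kernel, and a tied query/key projection so the kernel matrix is symmetric positive semi-definite. All names (sgpa_attention, W_qk, W_v, jitter) and these modeling choices are illustrative assumptions, not the paper's exact parameterization.

    import numpy as np

    def sgpa_attention(X, W_qk, W_v, jitter=1e-4):
        # Sketch of one SGPA-style attention head (assumed form, not the
        # paper's exact parameterization). Tokens X serve both as test
        # inputs (queries) and inducing inputs (keys); the tied projection
        # W_qk makes the kernel matrix symmetric PSD.
        Q = X @ W_qk                        # tied query/key features
        d = Q.shape[-1]
        # Valid symmetric kernel replacing the raw scaled dot product.
        K = np.exp(Q @ Q.T / np.sqrt(d))
        K_zz = K + jitter * np.eye(K.shape[0])  # jitter for stability
        V = X @ W_v                         # values as inducing outputs u
        A = np.linalg.solve(K_zz, K.T)      # K_zz^{-1} K_zx
        mean = A.T @ V                      # SGP posterior mean K_xz K_zz^{-1} u
        # Marginal posterior variance diag(K_xx - K_xz K_zz^{-1} K_zx),
        # which supplies the per-token uncertainty estimate.
        var = np.maximum(np.diag(K) - np.einsum('ij,ji->i', K, A), 0.0)
        return mean, var

    # Toy usage: 5 tokens with model dimension 8.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 8))
    W_qk = rng.normal(size=(8, 8)) / np.sqrt(8)
    W_v = rng.normal(size=(8, 8)) / np.sqrt(8)
    mean, var = sgpa_attention(X, W_qk, W_v)  # (5, 8) outputs, (5,) variances

In this sketch the posterior mean plays the role of the usual attention output, while the posterior variance is the extra quantity that a standard softmax attention head does not provide; the paper additionally learns variational parameters for the inducing outputs, which are omitted here for brevity.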

Related benchmarks

Task                           Dataset          Metric    Result  Rank
Out-of-Distribution Detection  CIFAR-100        AUROC     73.42   107
Out-of-Distribution Detection  SVHN             AUROC     61.57   62
Out-of-Distribution Detection  LSUN             AUROC     67.34   26
Text Classification            CoLA (test)      MCC       31.53   8
Image Classification           CIFAR-10 (test)  Accuracy  75.59   8
Text Classification            IMDB (test)      Accuracy  85.39   8
