Calibrating Transformers via Sparse Gaussian Processes
About
Transformer models have achieved profound success in prediction tasks across a wide range of applications in natural language processing, speech recognition and computer vision. Extending the Transformer's success to safety-critical domains requires calibrated uncertainty estimation, which remains under-explored. To address this, we propose Sparse Gaussian Process attention (SGPA), which performs Bayesian inference directly in the output space of the multi-head attention blocks (MHAs) in a Transformer to calibrate its uncertainty. It replaces the scaled dot-product operation with a valid symmetric kernel and uses sparse Gaussian process (SGP) techniques to approximate the posterior processes of MHA outputs. Empirically, on a suite of prediction tasks on text, images and graphs, SGPA-based Transformers achieve competitive predictive accuracy while noticeably improving both in-distribution calibration and out-of-distribution robustness and detection.
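The abstract's core idea, viewing attention through the lens of a sparse GP, can be illustrated with a minimal single-head sketch. This is a hedged illustration, not the paper's implementation: it treats keys as inducing inputs, values as inducing outputs, and queries as test inputs, uses a generic RBF kernel as the "valid symmetric kernel", and returns the standard sparse-GP posterior mean (the attention output) and per-query variance (the uncertainty estimate). All function names are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # A symmetric, PSD kernel standing in for scaled dot-product scores
    # (the actual kernel choice in SGPA may differ; this is illustrative).
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def sgp_attention(Q, K, V, jitter=1e-6):
    """Single-head attention as a sparse-GP posterior:
    keys = inducing inputs, values = inducing outputs, queries = test inputs."""
    Kqk = rbf_kernel(Q, K)
    Kkk = rbf_kernel(K, K) + jitter * np.eye(K.shape[0])
    A = np.linalg.solve(Kkk, Kqk.T).T            # K_qk K_kk^{-1}
    mean = A @ V                                 # posterior mean = attention output
    kqq = np.ones(Q.shape[0])                    # RBF kernel: k(q, q) = 1
    var = kqq - np.einsum('ij,ij->i', A, Kqk)    # per-query posterior variance
    return mean, np.maximum(var, 0.0)            # clip tiny negative numerics
```

When the queries coincide with the keys, the posterior mean interpolates the values and the variance collapses toward zero; for queries far from every key, the variance grows toward the prior, which is the mechanism that supplies calibrated uncertainty.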
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Out-of-Distribution Detection | CIFAR-100 | AUROC | 73.42 | 107 |
| Out-of-Distribution Detection | SVHN | AUROC | 61.57 | 62 |
| Out-of-Distribution Detection | LSUN | AUROC | 67.34 | 26 |
| Text Classification | CoLA (test) | MCC | 31.53 | 8 |
| Image Classification | CIFAR-10 (test) | Accuracy | 75.59 | 8 |
| Text Classification | IMDB (test) | Accuracy | 85.39 | 8 |