
Pathologies in priors and inference for Bayesian transformers

About

In recent years, the transformer has established itself as a workhorse in many applications ranging from natural language processing to reinforcement learning. Similarly, Bayesian deep learning has become the gold standard for uncertainty estimation in safety-critical applications, where robustness and calibration are crucial. Surprisingly, there have been no successful attempts to improve the predictive uncertainty of transformer models using Bayesian inference. In this work, we study this curiously underpopulated area of Bayesian transformers. We find that weight-space inference in transformers does not work well, regardless of the approximate posterior. We also find that the prior is at least partially at fault, but that it is very hard to find well-specified weight priors for these models. We hypothesize that these problems stem from the complexity of obtaining a meaningful mapping from weight-space to function-space distributions in the transformer. Therefore, moving closer to function-space, we propose a novel method based on the implicit reparameterization of the Dirichlet distribution to apply variational inference directly to the attention weights. We find that this proposed method performs competitively with our baselines.
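To make the core idea concrete, here is a minimal NumPy sketch of Dirichlet-distributed attention: instead of a deterministic softmax, each query's attention row is sampled from a Dirichlet whose concentration parameters are derived from the scaled dot-product logits. This is an illustrative sketch, not the paper's implementation: the softplus mapping to concentrations and the temperature `tau` are assumptions, and plain NumPy sampling omits the implicit reparameterization gradients that the paper relies on for variational inference.

```python
import numpy as np

def dirichlet_attention(q, k, v, tau=1.0, rng=None):
    """Single-head attention with Dirichlet-sampled attention weights (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    d = q.shape[-1]
    # Standard scaled dot-product logits, shape (n_queries, n_keys)
    logits = q @ k.T / np.sqrt(d)
    # Map real-valued logits to positive Dirichlet concentrations via softplus
    # (one hedged choice of parameterization; the paper's may differ)
    alpha = np.log1p(np.exp(logits / tau)) + 1e-6
    # Each row of attention weights is a Dirichlet draw: nonnegative, sums to 1,
    # so it plays the same role as a softmax row but carries uncertainty
    w = np.stack([rng.dirichlet(a) for a in alpha])
    return w @ v, w

# Usage: 3 queries, 3 keys, value dimension 5
rng = np.random.default_rng(0)
q = rng.standard_normal((3, 4))
k = rng.standard_normal((3, 4))
v = rng.standard_normal((3, 5))
out, w = dirichlet_attention(q, k, v, rng=rng)
```

In a trainable model one would instead use a distribution object with pathwise gradients through the Dirichlet sample (e.g. implicit reparameterization, as in the paper), so the concentration parameters can be learned by variational inference.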

Tristan Cinquin, Alexander Immer, Max Horn, Vincent Fortuin • 2021

Related benchmarks

Task | Dataset | Result | Rank
Out-of-Distribution Detection | MIMIC-III Newborn ICU Records | AUC-OOD 0.77 | 6
ICU mortality prediction | MIMIC-III Adult ICU Records | AUROC 0.895 | 6
