
QUEST: A robust attention formulation using query-modulated spherical attention

About

The Transformer model architecture has become one of the most widely used in deep learning, and the attention mechanism is at its core. The standard attention formulation applies a softmax operation to a scaled dot product between query and key vectors. We explore the role played by the norms of the queries and keys, which can cause training instabilities when they grow arbitrarily large. We demonstrate how this can happen even in simple Transformer models, in the presence of easy-to-learn spurious patterns in the data. We propose a new attention formulation, QUEry-modulated Spherical aTtention (QUEST), that constrains the keys to a hyperspherical latent space while still allowing individual tokens to flexibly control the sharpness of the attention distribution. QUEST can be used as a drop-in replacement for standard attention. We focus on vision applications while also exploring other domains to highlight the method's generality. We show that QUEST (1) trains without instabilities, (2) produces models with improved performance, and (3) is robust to data corruptions and adversarial attacks.
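The idea described in the abstract can be sketched as follows. This is a hypothetical illustration based only on the abstract's description (keys constrained to a hypersphere via L2 normalization, with the query left free so its norm can modulate attention sharpness); the paper's exact formulation may differ, e.g. in how the query-dependent sharpness is parameterized.

```python
import numpy as np

def quest_attention_sketch(Q, K, V):
    """Illustrative sketch of query-modulated spherical attention.

    Assumptions (not from the paper): keys are projected onto the unit
    hypersphere, and the unnormalized query's norm acts as a per-token
    temperature controlling the sharpness of the softmax.
    Q, K, V: arrays of shape (num_tokens, dim).
    """
    # Constrain keys to the unit hypersphere.
    K_sphere = K / np.linalg.norm(K, axis=-1, keepdims=True)
    # Dot product with unit-norm keys: the query's direction selects
    # which keys to attend to, while its norm scales all logits for
    # that token, sharpening or flattening its attention distribution.
    logits = Q @ K_sphere.T
    # Standard numerically stable softmax over keys.
    logits = logits - logits.max(axis=-1, keepdims=True)
    weights = np.exp(logits)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

Because the keys have unit norm, logits are bounded by each query's norm, so the key norms can no longer drive the logits (and hence the softmax) to extreme values during training.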

Hariprasath Govindarajan, Per Sidén, Jacob Roll, Fredrik Lindsten • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | ImageNet A | Top-1 Acc | 46.2 | 654 |
| Image Classification | ImageNet V2 | Top-1 Acc | 74.3 | 611 |
| Language Modeling | WikiText-103 (val) | PPL | 22.436 | 214 |
| Image Classification | ImageNet-ReaL | Precision@1 | 88.9 | 211 |
| Image Classification | ImageNet-C | mCE | 32.3 | 115 |
| Graph Classification | CIFAR10 | Accuracy | 72.843 | 110 |
| Graph Regression | ZINC | MAE | 0.069 | 105 |
| Video Object Segmentation | DAVIS 2017 | Jaccard Index (J) | 61.8 | 82 |
| Graph Regression | Peptides-struct | MAE | 0.251 | 76 |
| Image Retrieval | ROxford | -- | -- | 67 |

(Showing 10 of 43 rows)
