Krause Synchronization Transformers

About

Self-attention in Transformers relies on globally normalized softmax weights, causing all tokens to compete for influence at every layer. When composed across depth, this interaction pattern induces strong synchronization dynamics that favor convergence toward a dominant mode, a behavior associated with representation collapse and attention sink phenomena. We introduce Krause Attention, a principled attention mechanism inspired by bounded-confidence consensus dynamics. Krause Attention replaces similarity-based global aggregation with distance-based, localized, and selectively sparse interactions, promoting structured local synchronization instead of global mixing. We relate this behavior to recent theory modeling Transformer dynamics as interacting particle systems, and show how bounded-confidence interactions naturally moderate attention concentration and alleviate attention sinks. Restricting interactions to local neighborhoods also reduces runtime complexity from quadratic to linear in sequence length. Experiments across vision (ViT on CIFAR/ImageNet), autoregressive generation (MNIST/CIFAR-10), and large language models (Llama/Qwen) demonstrate consistent gains with substantially reduced computation, highlighting bounded-confidence dynamics as a scalable and effective inductive bias for attention.
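The abstract does not give the exact update rule, but the bounded-confidence idea it invokes (Hegselmann–Krause consensus dynamics) can be sketched as follows: each token aggregates only tokens within a distance threshold, with weights normalized over that local neighborhood rather than over the whole sequence. The threshold `eps`, the uniform neighbor weighting, and the dense pairwise-distance computation are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def krause_attention(x, eps=1.0):
    """Bounded-confidence aggregation sketch (Hegselmann-Krause style).

    x:   (n, d) array of token representations.
    eps: confidence radius -- a hypothetical hyperparameter; the paper's
         actual mechanism and parameterization may differ.

    Each token averages only neighbors within Euclidean distance eps,
    replacing globally normalized softmax weights with locally
    normalized, selectively sparse interactions.
    """
    # Pairwise Euclidean distances, shape (n, n).
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    # Bounded-confidence neighborhood: interact only within radius eps.
    # Each token is its own neighbor (distance 0), so rows never sum to 0.
    mask = (dist <= eps).astype(x.dtype)
    # Normalize over the local neighborhood, not the whole sequence.
    weights = mask / mask.sum(axis=-1, keepdims=True)
    return weights @ x
```

Note that this dense sketch still computes all pairwise distances and is therefore quadratic; the linear-time complexity claimed in the abstract comes from restricting candidate interactions to local neighborhoods in the first place, which this illustration does not implement.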

Jingkun Liu, Yisong Yue, Max Welling, Yue Song • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | PIQA | Accuracy | 77.77 | 647 |
| Instruction Following | IFEval | Accuracy (0-100) | 34.01 | 292 |
| Question Answering | BoolQ | Accuracy | 84.78 | 240 |
| Image Classification | ImageNet-1K | Accuracy | 75.69 | 190 |
| Reasoning | PIQA | Accuracy | 73.7 | 133 |
| Image Classification | CIFAR-10 | Accuracy | 95.35 | 101 |
| Language Understanding | MMLU-Pro | Accuracy | 41.67 | 70 |
| Natural Language Inference | MNLI | Accuracy | 83.83 | 22 |
| Image Classification | Fashion MNIST | Accuracy | 96.1 | 16 |
| Image Generation | MNIST (test) | -- | -- | 13 |

(10 of 15 rows shown)
