Vision Transformers Need Registers

About

Transformers have recently emerged as a powerful tool for learning visual representations. In this paper, we identify and characterize artifacts in the feature maps of both supervised and self-supervised ViT networks. The artifacts correspond to high-norm tokens that appear during inference, primarily in low-informative background areas of images, and are repurposed for internal computations. We propose a simple yet effective solution: providing additional tokens to the input sequence of the Vision Transformer to fill that role. We show that this solution fixes the problem entirely for both supervised and self-supervised models, sets a new state of the art for self-supervised visual models on dense visual prediction tasks, enables object discovery methods with larger models, and, most importantly, leads to smoother feature maps and attention maps for downstream visual processing.
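The mechanism described above amounts to appending extra "register" tokens to the ViT input sequence and discarding them after the transformer blocks. The sketch below illustrates that bookkeeping with NumPy; the function names, dimensions, and the use of NumPy arrays in place of learnable parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def with_registers(cls_token, patch_tokens, registers):
    """Build the transformer input sequence: [CLS] + patch tokens + registers.

    cls_token:    (1, d) array
    patch_tokens: (n, d) array of image patch embeddings
    registers:    (r, d) array standing in for learned register tokens
    """
    return np.concatenate([cls_token, patch_tokens, registers], axis=0)

def strip_registers(output_seq, num_registers):
    """Discard the register outputs after the transformer blocks;
    only the CLS and patch tokens are used downstream."""
    return output_seq[:-num_registers]

# Example: a ViT with 196 patches, embedding dim 8, and 4 registers.
cls = np.zeros((1, 8))
patches = np.zeros((196, 8))
regs = np.zeros((4, 8))

seq = with_registers(cls, patches, regs)      # shape (201, 8), fed to the blocks
out = strip_registers(seq, num_registers=4)   # shape (197, 8), used downstream
```

The registers participate in attention during the forward pass (absorbing the internal computations that would otherwise land on background patch tokens) but carry no output role, which is why they are simply dropped at the end.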

Timothée Darcet, Maxime Oquab, Julien Mairal, Piotr Bojanowski • 2023

Related benchmarks

Task                       Dataset          Metric      Result   Rank
Semantic segmentation      ADE20K (val)     mIoU        50.71    2888
Image Classification       ImageNet-1K      Top-1 Acc   83.41    1239
Video Object Segmentation  DAVIS 2017 (val) J mean      59.6     1193
Semantic segmentation      ADE20K           mIoU        48.68    1024
Image Classification       ImageNet V2      Top-1 Acc   74.8     611
Image Classification       ImageNet-R       Top-1 Acc   43.8     529
Image Clustering           CIFAR-10         NMI         0.847    318
Image Clustering           STL-10           ACC         72.7     282
Semantic segmentation      ScanNet (val)    mIoU        63.36    274
Image Classification       ImageNet-1K      Accuracy    83.41    193

Showing 10 of 42 rows
