
SERE: Exploring Feature Self-relation for Self-supervised Transformer

About

Learning representations with self-supervision has proven effective for convolutional neural networks (CNNs) on vision tasks. As an alternative to CNNs, vision transformers (ViTs) have strong representation ability thanks to spatial self-attention and channel-level feed-forward networks. Recent works reveal that self-supervised learning helps unleash the great potential of ViTs. Still, most works follow self-supervised strategies designed for CNNs, e.g., instance-level discrimination of samples, and ignore the properties of ViTs. We observe that relational modeling on the spatial and channel dimensions distinguishes ViTs from other networks. To enforce this property, we explore the feature SElf-RElation (SERE) for training self-supervised ViTs. Specifically, instead of conducting self-supervised learning solely on feature embeddings from multiple views, we utilize feature self-relations, i.e., spatial/channel self-relations, for self-supervised learning. Self-relation-based learning further enhances the relation-modeling ability of ViTs, resulting in stronger representations that stably improve performance on multiple downstream tasks. Our source code is publicly available at: https://github.com/MCG-NKU/SERE.
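The idea of spatial and channel self-relations can be sketched as follows. This is a minimal illustration, not the authors' implementation: given the token features of one view, a spatial self-relation is a token-by-token similarity distribution and a channel self-relation is a channel-by-channel one, and two views can be aligned by a cross-entropy between their relation distributions. All function names, the temperature value, and the normalization choices here are assumptions for illustration; see the official repository for the actual method.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def spatial_self_relation(feat, tau=0.1):
    """feat: (N, C) token features from one augmented view.
    Returns an (N, N) row-stochastic token-to-token relation matrix."""
    f = feat / np.linalg.norm(feat, axis=1, keepdims=True)  # L2-normalize each token
    return softmax(f @ f.T / tau, axis=-1)

def channel_self_relation(feat, tau=0.1):
    """Returns a (C, C) row-stochastic channel-to-channel relation matrix."""
    f = feat / np.linalg.norm(feat, axis=0, keepdims=True)  # L2-normalize each channel
    return softmax(f.T @ f / tau, axis=-1)

def relation_alignment_loss(rel_student, rel_teacher, eps=1e-8):
    """Cross-entropy between the relation distributions of two views,
    averaged over rows; the teacher relation acts as the target."""
    return -np.mean(np.sum(rel_teacher * np.log(rel_student + eps), axis=-1))

# Toy usage: align the spatial relations of two "views" of the same features.
rng = np.random.default_rng(0)
view_a = rng.standard_normal((4, 8))          # 4 tokens, 8 channels
view_b = view_a + 0.01 * rng.standard_normal((4, 8))  # slightly perturbed view
loss = relation_alignment_loss(spatial_self_relation(view_a),
                               spatial_self_relation(view_b))
```

In practice the relations would be computed on multi-head ViT features with a stop-gradient teacher branch; the sketch only shows the relation matrices themselves and the alignment objective.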

Zhong-Yu Li, Shanghua Gao, Ming-Ming Cheng • 2022

Related benchmarks

Task | Dataset | Result | Rank
Semantic segmentation | ADE20K (val) | mIoU: 50 | 2731
Object Detection | COCO 2017 (val) | -- | 2454
Image Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy: 83.7 | 1866
Instance Segmentation | COCO 2017 (val) | APm: 0.405 | 1144
Image Classification | CIFAR-10 | -- | 507
Image Classification | Stanford Cars | -- | 477
Semantic segmentation | PASCAL VOC (val) | mIoU: 79.7 | 338
Image Classification | iNaturalist 2019 | Top-1 Accuracy: 77.5 | 98
Image Classification | Oxford Flowers | Top-1 Accuracy: 98 | 78
Image Classification | ImageNet 1% labels 1.0 (val) | Top-1 Accuracy: 55.9 | 33

(Showing 10 of 13 rows.)

Other info

Code: https://github.com/MCG-NKU/SERE