
Boosting vision transformers for image retrieval

About

Vision transformers have achieved remarkable progress in vision tasks such as image classification and detection. However, in instance-level image retrieval, transformers have not yet shown good performance compared to convolutional networks. We propose a number of improvements that make transformers outperform the state of the art for the first time. (1) We show that a hybrid architecture is more effective than plain transformers, by a large margin. (2) We introduce two branches collecting global (classification token) and local (patch tokens) information, from which we form a global image representation. (3) In each branch, we collect multi-layer features from the transformer encoder, corresponding to skip connections across distant layers. (4) We enhance locality of interactions at the deeper layers of the encoder, which is the relative weakness of vision transformers. We train our model on all commonly used training sets and, for the first time, we make fair comparisons separately per training set. In all cases, we outperform previous models based on global representation. Public code is available at https://github.com/dealicious-inc/DToP.
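The abstract describes fusing a global branch (classification token) and a local branch (patch tokens), each collecting multi-layer features, into one global image representation. A minimal sketch of that idea, assuming averaged CLS tokens for the global branch and GeM-style pooling of patch tokens for the local branch (the paper's exact fusion may differ; all function names here are illustrative):

```python
import numpy as np

def l2n(x, eps=1e-6):
    """L2-normalize along the last axis."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def global_descriptor(layer_cls, layer_patches):
    """Fuse multi-layer CLS tokens (global branch) and patch tokens
    (local branch) into one L2-normalized image descriptor.
    Illustrative sketch only, not the paper's exact architecture."""
    # Global branch: average the CLS tokens from the selected layers.
    g = np.mean(np.stack(layer_cls), axis=0)                 # (D,)
    # Local branch: GeM-style pooling over patch tokens per layer,
    # then average across layers.
    pooled = [np.mean(np.maximum(p, 0) ** 3, axis=0) ** (1 / 3)
              for p in layer_patches]                        # each (D,)
    l = np.mean(np.stack(pooled), axis=0)
    # Concatenate the two normalized branches and normalize again.
    return l2n(np.concatenate([l2n(g), l2n(l)]))

rng = np.random.default_rng(0)
cls_tokens = [rng.standard_normal(256) for _ in range(3)]        # 3 layers, D=256
patch_tokens = [rng.standard_normal((196, 256)) for _ in range(3)]
desc = global_descriptor(cls_tokens, patch_tokens)
print(desc.shape)  # (512,)
```

Descriptors like this are compared with a dot product at retrieval time, which is why the final L2 normalization matters.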

Chull Hwan Song, Jooyoung Yoon, Shunghyun Choi, Yannis Avrithis · 2022

Related benchmarks

Task             Dataset                            Result     Rank
Image Retrieval  Revisited Oxford (ROxf) (Medium)   mAP 70.8   124
Image Retrieval  Revisited Paris (RPar) (Hard)      mAP 67.9   115
Image Retrieval  Oxford 5k                          mAP 90.6   100
Image Retrieval  Revisited Paris (RPar) (Medium)    mAP 70.9   100
Image Retrieval  Revisited Oxford (ROxf) (Hard)     mAP 48.0   81
Image Retrieval  Paris Revisited (Medium)           mAP 83.2   63
Image Retrieval  Paris6k                            mAP 94.4   45
Image Retrieval  Oxford Revisited (Hard)            mAP 27.1   33
Image Retrieval  RPar+R1M Medium                    mAP 57.6   31
Image Retrieval  RPar+R1M Hard                      mAP 32.7   31

(Showing 10 of 14 rows)
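All results above are reported as mean average precision (mAP): the average precision (AP) of each query's ranked retrieval list, averaged over all queries. A minimal sketch of the standard computation (variable names are illustrative):

```python
def average_precision(ranked_relevance):
    """AP for one query: ranked_relevance is a list of 0/1 relevance
    flags in the order the images were retrieved."""
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at this hit
    return sum(precisions) / hits if hits else 0.0

# mAP is the mean of per-query APs.
queries = [[1, 0, 1, 0], [0, 1, 1, 0]]
aps = [average_precision(q) for q in queries]
map_score = sum(aps) / len(aps)
print(round(map_score, 4))  # 0.7083
```

The revisited benchmarks (ROxf/RPar) report this under Easy/Medium/Hard protocols, which differ in which ground-truth images count as relevant.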
