# Transformer Based Multi-Grained Features for Unsupervised Person Re-Identification

## About
Multi-grained features extracted from convolutional neural networks (CNNs) have demonstrated strong discriminative ability in supervised person re-identification (Re-ID). Inspired by this, we investigate how to extract multi-grained features from a pure transformer network to address unsupervised Re-ID, which is label-free and far more challenging. To this end, we build a dual-branch network architecture based upon a modified Vision Transformer (ViT). The local tokens output by each branch are reshaped and then uniformly partitioned into multiple stripes to generate part-level features, while the global tokens of the two branches are averaged to produce a global feature. Further, building on offline-online associated camera-aware proxies (O2CAP), a top-performing unsupervised Re-ID method, we define offline and online contrastive learning losses with respect to both the global and part-level features to conduct unsupervised learning. Extensive experiments on three person Re-ID datasets show that the proposed method outperforms state-of-the-art unsupervised methods by a considerable margin, greatly narrowing the gap to its supervised counterparts. Code will be available soon at https://github.com/RikoLi/WACV23-workshop-TMGF.
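The multi-grained feature extraction described above can be sketched with plain NumPy. This is a hypothetical illustration, not the repository's implementation: the grid size, embedding dimension, stripe count, and the `branch_features` helper are all assumed for demonstration. Each branch's patch (local) tokens are reshaped back to their spatial grid and pooled into horizontal stripes to form part-level features, while the two branches' class (global) tokens are averaged into one global feature.

```python
import numpy as np

# Illustrative sizes (assumed): 16x8 patch grid, 768-dim embeddings,
# 4 horizontal stripes per branch for part-level features.
H, W, D = 16, 8, 768
num_stripes = 4

rng = np.random.default_rng(0)

def branch_features(cls_token, patch_tokens):
    """Reshape a branch's patch tokens to the spatial grid and pool
    uniform horizontal stripes into part-level features."""
    grid = patch_tokens.reshape(H, W, D)                      # (H, W, D)
    stripes = grid.reshape(num_stripes, H // num_stripes, W, D)
    parts = stripes.mean(axis=(1, 2))                         # (num_stripes, D)
    return cls_token, parts

# Stand-in outputs of the two transformer branches (random for the sketch).
cls1, patches1 = rng.normal(size=D), rng.normal(size=(H * W, D))
cls2, patches2 = rng.normal(size=D), rng.normal(size=(H * W, D))

g1, parts1 = branch_features(cls1, patches1)
g2, parts2 = branch_features(cls2, patches2)

# Global tokens of the two branches are averaged into a single global feature.
global_feat = (g1 + g2) / 2
```

In the paper's setting, the global feature and each part-level feature would then each feed the offline and online contrastive losses; here the sketch only shows the shapes involved.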
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Person Re-Identification | Market1501 (test) | Rank-1 Accuracy | 95.5 | 1264 |
| Person Re-Identification | Market1501 | mAP | 91.9 | 999 |
| Person Re-Identification | DukeMTMC-reID | Rank-1 Accuracy | 92.3 | 648 |
| Person Re-Identification | MSMT17 (test) | Rank-1 Accuracy | 88.2 | 499 |
| Person Re-Identification | MSMT17 | mAP | 70.3 | 404 |
| Person Re-Identification | Market (test) | mAP | 91.9 | 14 |