Improving Description-based Person Re-identification by Multi-granularity Image-text Alignments
About
Description-based person re-identification (Re-id) is an important task in video surveillance that requires discriminative cross-modal representations to distinguish different people. It is difficult to directly measure the similarity between images and descriptions due to modality heterogeneity (the cross-modal problem). Moreover, all samples belong to a single category, pedestrians (the fine-grained problem), which makes this task even harder than conventional image-description matching. In this paper, we propose a Multi-granularity Image-text Alignments (MIA) model to alleviate the cross-modal fine-grained problem and enable better similarity evaluation in description-based person Re-id. Specifically, three alignments of different granularity, i.e., global-global, global-local, and local-local, are carried out hierarchically. First, the global-global alignment in the Global Contrast (GC) module matches the global contexts of images and descriptions. Second, the global-local alignment in the Relation-guided Global-local Alignment (RGA) module exploits the relations between local components and global contexts to adaptively highlight the distinguishable components and suppress the uninvolved ones. Third, for the local-local alignment, the Bi-directional Fine-grained Matching (BFM) module matches visual human parts with noun phrases. The whole network, combining all granularities, can be trained end-to-end without complex pre-processing. To address the difficulty of jointly training multiple granularities, we propose an effective step training strategy that trains the granularities step-by-step. Extensive experiments and analysis show that our method obtains state-of-the-art performance on the CUHK-PEDES dataset and outperforms previous methods by a significant margin.
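To make the hierarchy concrete, the sketch below combines the three granularities into one image-text score. This is a minimal illustration, not the authors' implementation: the function name `mia_similarity`, the embedding shapes, the softmax relation weighting, and the additive fusion of the three terms are all simplifying assumptions; the actual GC, RGA, and BFM modules are learned networks.

```python
import numpy as np

def cosine(a, b):
    # pairwise cosine similarity along the last axis
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def mia_similarity(img_global, txt_global, img_parts, txt_phrases):
    """Toy combination of the three alignment granularities.
    img_global, txt_global: (d,) global embeddings
    img_parts: (P, d) visual part embeddings
    txt_phrases: (N, d) noun-phrase embeddings
    """
    # 1) global-global (GC-like): cosine similarity of the global contexts
    s_gg = float(np.dot(img_global, txt_global) /
                 (np.linalg.norm(img_global) * np.linalg.norm(txt_global)))

    # 2) global-local (RGA-like, simplified): weight each local component by
    #    its relation to the other modality's global context, aggregate, and
    #    compare the aggregate with that global context
    def global_local(parts, other_global):
        rel = parts @ other_global                  # relation scores, shape (P,)
        w = np.exp(rel - rel.max()); w /= w.sum()   # softmax weights
        agg = w @ parts                             # relation-weighted aggregate
        return float(np.dot(agg, other_global) /
                     (np.linalg.norm(agg) * np.linalg.norm(other_global)))
    s_gl = 0.5 * (global_local(img_parts, txt_global) +
                  global_local(txt_phrases, img_global))

    # 3) local-local (BFM-like, simplified): bidirectional matching -- each
    #    part keeps its best-matching phrase and vice versa, then both
    #    directions are averaged
    sim = cosine(img_parts, txt_phrases)            # (P, N) pairwise similarities
    s_ll = 0.5 * (sim.max(axis=1).mean() + sim.max(axis=0).mean())

    return s_gg + s_gl + s_ll
```

Each term lies in [-1, 1], so the fused score is bounded by [-3, 3]; a learned fusion (or the step training strategy above) would weight the terms rather than sum them equally.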
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text-based Person Search | CUHK-PEDES (test) | Rank-1: 53.1 | 142 |
| Text-based Person Search | ICFG-PEDES (test) | R@1: 46.49 | 104 |
| Text-to-Image Retrieval | CUHK-PEDES (test) | Recall@1: 53.1 | 96 |
| Text-to-image Person Re-identification | ICFG-PEDES (test) | Rank-1: 0.4649 | 81 |
| Text-based Person Search | CUHK-PEDES | Recall@1: 53.1 | 61 |
| Person Search | CUHK-PEDES (test) | Recall@1: 53.1 | 47 |
| Text-to-image Person Re-identification | CUHK-PEDES | Rank-1: 53.1 | 34 |
| Text-based Person Retrieval | ICFG-PEDES | R@1: 46.49 | 32 |
| Text to Image | CUHK-PEDES | Rank-1: 53.1 | 28 |
| Text-based Person Retrieval | CUHK-PEDES 1.0 (test) | R@1: 53.1 | 15 |