
Improving Description-based Person Re-identification by Multi-granularity Image-text Alignments

About

Description-based person re-identification (Re-id) is an important task in video surveillance that requires discriminative cross-modal representations to distinguish different people. It is difficult to directly measure the similarity between images and descriptions due to the modality heterogeneity (the cross-modal problem). Moreover, the fact that all samples belong to a single category (the fine-grained problem) makes this task even harder than the conventional image-description matching task. In this paper, we propose a Multi-granularity Image-text Alignments (MIA) model to alleviate the cross-modal fine-grained problem for better similarity evaluation in description-based person Re-id. Specifically, three different granularities, i.e., global-global, global-local and local-local alignments, are carried out hierarchically. Firstly, the global-global alignment in the Global Contrast (GC) module matches the global contexts of images and descriptions. Secondly, the global-local alignment employs the potential relations between local components and global contexts to highlight the distinguishable components while adaptively eliminating the uninvolved ones in the Relation-guided Global-local Alignment (RGA) module. Thirdly, for the local-local alignment, we match visual human parts with noun phrases in the Bi-directional Fine-grained Matching (BFM) module. The whole network, combining multiple granularities, can be trained end-to-end without complex pre-processing. To address the difficulties in training the combination of multiple granularities, an effective step training strategy is proposed to train these granularities step by step. Extensive experiments and analysis show that our method obtains state-of-the-art performance on the CUHK-PEDES dataset and outperforms previous methods by a significant margin.

Kai Niu, Yan Huang, Wanli Ouyang, Liang Wang • 2019

Related benchmarks

Task | Dataset | Result | Rank
Text-based Person Search | CUHK-PEDES (test) | Rank-1: 53.1 | 142
Text-based Person Search | ICFG-PEDES (test) | R@1: 46.49 | 104
Text-to-Image Retrieval | CUHK-PEDES (test) | Recall@1: 53.1 | 96
Text-to-image Person Re-identification | ICFG-PEDES (test) | Rank-1: 0.4649 | 81
Text-based Person Search | CUHK-PEDES | Recall@1: 53.1 | 61
Person Search | CUHK-PEDES (test) | Recall@1: 53.1 | 47
Text-to-image Person Re-identification | CUHK-PEDES | Rank-1: 53.1 | 34
Text-based Person Retrieval | ICFG-PEDES | R@1: 46.49 | 32
Text to Image | CUHK-PEDES | Rank-1: 53.1 | 28
Text-based Person Retrieval | CUHK-PEDES 1.0 (test) | R@1: 53.1 | 15

(Showing 10 of 17 rows)
