
BiLMa: Bidirectional Local-Matching for Text-based Person Re-identification

About

Text-based person re-identification (TBPReID) aims to retrieve person images that match a given textual query. A crucial challenge in this task is how to effectively align images and texts both globally and locally. Recent works have achieved high performance by solving Masked Language Modeling (MLM) to align image/text parts. However, they perform only uni-directional (i.e., from image to text) local matching, leaving room for improvement by introducing opposite-directional (i.e., from text to image) local matching. In this work, we introduce the Bidirectional Local-Matching (BiLMa) framework, which jointly optimizes MLM and Masked Image Modeling (MIM) during TBPReID model training. With this framework, the model is trained so that the labels of randomly masked image and text tokens are predicted from the unmasked tokens. In addition, to narrow the semantic gap between image and text in MIM, we propose Semantic MIM (SemMIM), in which the labels of masked image tokens are automatically assigned by a state-of-the-art human parser. Experimental results demonstrate that our BiLMa framework with SemMIM achieves state-of-the-art Rank@1 and mAP scores on three benchmarks.
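The joint MLM+MIM objective described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the mask ratio, and the weighting factor `lam` are assumptions, and in the actual SemMIM the image-token labels would come from a human parser rather than being supplied directly.

```python
import math
import random

def cross_entropy(probs, label):
    # Negative log-likelihood of the true label under a predicted distribution.
    return -math.log(probs[label])

def mask_tokens(tokens, mask_ratio, mask_id, rng):
    # Replace a random subset of token ids with a [MASK] id and return the
    # masked sequence plus the positions whose labels must be predicted.
    k = max(1, int(len(tokens) * mask_ratio))
    positions = rng.sample(range(len(tokens)), k)
    masked = list(tokens)
    for p in positions:
        masked[p] = mask_id
    return masked, sorted(positions)

def bilma_loss(text_preds, text_labels, image_preds, image_labels, lam=1.0):
    # Bidirectional local-matching objective (illustrative): an MLM loss over
    # masked text tokens plus a MIM loss over masked image tokens, combined
    # with a hypothetical weighting factor `lam`.
    l_mlm = sum(cross_entropy(p, y)
                for p, y in zip(text_preds, text_labels)) / len(text_labels)
    l_mim = sum(cross_entropy(p, y)
                for p, y in zip(image_preds, image_labels)) / len(image_labels)
    return l_mlm + lam * l_mim

# Toy usage: mask half of a 4-token sequence, then score uniform predictions.
rng = random.Random(0)
masked_seq, masked_pos = mask_tokens([5, 6, 7, 8], mask_ratio=0.5,
                                     mask_id=0, rng=rng)
uniform = [[0.25, 0.25, 0.25, 0.25]] * 2
loss = bilma_loss(uniform, [1, 2], uniform, [0, 3], lam=1.0)
```

Both streams are masked and predicted in the same training step, which is what makes the local matching bidirectional rather than image-to-text only.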

Takuro Fujii, Shuhei Tarashima • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-image Person Re-identification | CUHK-PEDES (test) | Rank-1 Accuracy (R-1) | 74.03 | 150 |
| Text-based Person Search | CUHK-PEDES (test) | Rank-1 | 74.03 | 142 |
| Text-based Person Search | ICFG-PEDES (test) | R@1 | 63.83 | 104 |
| Text-based Person Search | RSTPReid (test) | R@1 | 61.2 | 85 |
| Text-based Person Re-identification | RSTPReid | Rank-1 Accuracy | 61.2 | 15 |
