
Look Before You Leap: Improving Text-based Person Retrieval by Learning A Consistent Cross-modal Common Manifold

About

The core problem of text-based person retrieval is how to bridge the heterogeneous gap between multi-modal data. Many previous approaches attempt to learn a latent common manifold mapping in a cross-modal distribution consensus prediction (CDCP) manner. When features from one modality are mapped into the common manifold, the feature distribution of the opposite modality is completely invisible. That is, how to achieve a cross-modal distribution consensus, so as to embed and align the multi-modal features in the constructed common manifold, depends entirely on the experience of the model itself rather than on the actual situation. With such methods, the multi-modal data inevitably cannot be well aligned in the common manifold, which ultimately leads to sub-optimal retrieval performance. To overcome this CDCP dilemma, we propose a novel algorithm termed LBUL to learn a Consistent Cross-modal Common Manifold (C³M) for text-based person retrieval. The core idea of our method, as a Chinese saying goes ("san si er hou xing"), is to Look Before yoU Leap (LBUL). The common manifold mapping mechanism of LBUL contains a looking step and a leaping step. Compared to CDCP-based methods, LBUL considers the distribution characteristics of both the visual and textual modalities before embedding data from one modality into C³M, achieving a more solid cross-modal distribution consensus and hence superior retrieval accuracy. We evaluate the proposed method on two text-based person retrieval datasets, CUHK-PEDES and RSTPReid. Experimental results demonstrate that LBUL outperforms previous methods and achieves state-of-the-art performance.
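
The look-then-leap mapping can be pictured with a short sketch. The PyTorch snippet below is a minimal illustration, not the authors' implementation: it assumes the "looking" step scores an input feature against learnable prototypes that summarize both modality distributions, and the "leaping" step projects the feature into C³M gated by those cross-modal cues. All names (LookBeforeYouLeap, visual_protos, num_protos, and so on) are hypothetical.

```python
import torch
import torch.nn as nn

class LookBeforeYouLeap(nn.Module):
    # Hypothetical sketch of a look-then-leap common-manifold mapping.
    # "Look": score a feature against learnable prototypes that stand in
    # for the visual and textual feature distributions.
    # "Leap": project the feature into the common manifold (C3M),
    # gated by those cross-modal cues.
    def __init__(self, feat_dim=512, manifold_dim=256, num_protos=32):
        super().__init__()
        self.visual_protos = nn.Parameter(torch.randn(num_protos, feat_dim))
        self.textual_protos = nn.Parameter(torch.randn(num_protos, feat_dim))
        # Gate conditioned on cues from BOTH modalities (the "look").
        self.gate = nn.Sequential(nn.Linear(2 * num_protos, feat_dim), nn.Sigmoid())
        # Projection into the common manifold (the "leap").
        self.leap = nn.Linear(feat_dim, manifold_dim)

    def look(self, x):
        # Cosine similarity of each feature to every prototype of each modality.
        v = torch.cosine_similarity(x.unsqueeze(1), self.visual_protos.unsqueeze(0), dim=-1)
        t = torch.cosine_similarity(x.unsqueeze(1), self.textual_protos.unsqueeze(0), dim=-1)
        return torch.cat([v, t], dim=-1)       # (batch, 2 * num_protos)

    def forward(self, x):
        cues = self.look(x)                    # looking step
        return self.leap(x * self.gate(cues))  # leaping step

# Usage: the same mapper embeds features from either modality,
# so both "see" both distributions before entering the manifold.
mapper = LookBeforeYouLeap()
img_emb = mapper(torch.randn(4, 512))  # e.g. image-encoder features
txt_emb = mapper(torch.randn(4, 512))  # e.g. text-encoder features
```

In a CDCP-style mapping the gate would depend only on the input's own modality; conditioning it on prototypes from both modalities is what makes the cross-modal consensus visible before the leap.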

Zijie Wang, Aichun Zhu, Jingyi Xue, Xili Wan, Chao Liu, Tian Wang, Yifeng Li • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Text-to-image Person Re-identification | CUHK-PEDES (test) | Rank-1 Accuracy (R-1) | 64.04 | 150
Text-based Person Search | CUHK-PEDES (test) | Rank-1 | 64.04 | 142
Text-to-Image Retrieval | CUHK-PEDES (test) | Recall@1 | 64.04 | 96
Text-based Person Search | RSTPReid (test) | R@1 | 45.55 | 85
Text-based Person Search | CUHK-PEDES | Recall@1 | 64.04 | 61
Text-based Person Re-identification | RSTPReid (test) | Rank-1 Acc | 45.55 | 52
Text-to-image person retrieval | RSTPReid | Rank-1 Accuracy | 45.55 | 32
Text to Image | CUHK-PEDES | Rank-1 | 64.04 | 28
Text-to-image person retrieval | RSTPReid (test) | Rank-1 Accuracy | 45.55 | 17
Text-based Person Re-identification | RSTPReid | Rank-1 Accuracy | 45.55 | 15
