
Exploiting the Textual Potential from Vision-Language Pre-training for Text-based Person Search

About

Text-based Person Search (TPS) aims to retrieve pedestrians matching a text description rather than a query image. Recent Vision-Language Pre-training (VLP) models bring transferable knowledge to downstream TPS tasks, enabling more efficient performance gains. However, existing VLP-enhanced TPS methods utilize only the pre-trained visual encoder, discarding the corresponding textual representation and breaking the modality alignment learned during large-scale pre-training. In this paper, we explore fully exploiting the textual potential of VLP for TPS. We first build a VLP-TPS baseline model, the first TPS model to use both pre-trained modalities. We then propose Multi-Integrity Description Constraints (MIDC), which enhance the robustness of the textual modality by incorporating different components of the fine-grained corpus during training. Inspired by prompt-based zero-shot classification with VLP models, we further propose the Dynamic Attribute Prompt (DAP), which supplies a unified corpus of fine-grained attributes as language hints for the image modality. Extensive experiments show that the proposed TPS framework achieves state-of-the-art performance, exceeding the previous best method by a clear margin.

Guanshuo Wang, Fufu Yu, Junjie Li, Qiong Jia, Shouhong Ding • 2023

Related benchmarks

Task                      Dataset            Metric   Result   Rank
Text-based Person Search  CUHK-PEDES (test)  Rank-1   70.16    142
Text-based Person Search  ICFG-PEDES (test)  R@1      60.64    104
Text-based Person Search  RSTPReid (test)    R@1      50.65    85
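The Rank-1 and R@1 figures above are the same metric: the fraction of text queries whose ground-truth image is the top-ranked gallery item under cross-modal similarity. A minimal sketch of how such recall@k scores are computed from dual-encoder embeddings, assuming a CLIP-style setup where both encoders map into a shared space (the function name and toy data below are illustrative, not from the paper):

```python
import numpy as np

def recall_at_k(text_emb, image_emb, gt_index, k=1):
    """Fraction of text queries whose ground-truth image appears in the
    top-k gallery images ranked by cosine similarity.

    text_emb:  (num_queries, dim) text-encoder outputs
    image_emb: (num_gallery, dim) image-encoder outputs
    gt_index:  for each query, the index of its matching gallery image
    """
    # L2-normalize so the dot product equals cosine similarity
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    sims = t @ v.T                              # (num_queries, num_gallery)
    topk = np.argsort(-sims, axis=1)[:, :k]     # highest-similarity indices
    hits = (topk == np.asarray(gt_index)[:, None]).any(axis=1)
    return hits.mean()

# Toy check: orthogonal embeddings where query i matches image i exactly,
# so every query retrieves its ground truth at rank 1.
t = np.eye(3)
v = np.eye(3)
print(recall_at_k(t, v, gt_index=[0, 1, 2], k=1))  # → 1.0
```

In practice the gallery contains thousands of pedestrian images, and Rank-5/Rank-10 are reported alongside Rank-1 by varying `k`.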
