CLIP-based Synergistic Knowledge Transfer for Text-based Person Retrieval
About
Text-based Person Retrieval (TPR) aims to retrieve images of a target person given a textual query. The primary challenge lies in bridging the substantial gap between the vision and language modalities, especially given the limited availability of large-scale datasets. In this paper, we introduce a CLIP-based Synergistic Knowledge Transfer (CSKT) approach for TPR. Specifically, to exploit CLIP's knowledge on the input side, we first propose a Bidirectional Prompts Transferring (BPT) module built from text-to-image and image-to-text bidirectional prompts with coupling projections. Secondly, Dual Adapters Transferring (DAT) is designed to transfer knowledge on the output side of the Multi-Head Attention (MHA) layers in both the vision and language branches. This synergistic two-way collaborative mechanism promotes early-stage feature fusion and efficiently exploits CLIP's existing knowledge. CSKT outperforms state-of-the-art approaches across three benchmark datasets while its trainable parameters account for merely 7.4% of the entire model, demonstrating its remarkable efficiency, effectiveness, and generalization.
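To make the two modules more concrete, below is a minimal PyTorch sketch of the pattern the abstract describes: coupled cross-modal prompts on the input side (BPT) and bottleneck adapters after multi-head attention (DAT). All names and sizes here (`BidirectionalPrompts`, `MHAAdapter`, `prompt_len`, `bottleneck`, the dimensions) are illustrative assumptions, not the authors' released code; the sketch only shows one plausible realization of the technique.

```python
import torch
import torch.nn as nn


class BidirectionalPrompts(nn.Module):
    """Hypothetical sketch of Bidirectional Prompts Transferring (BPT).

    Learnable prompt tokens are kept per modality; a pair of coupling
    projections maps text prompts into the image branch and image prompts
    into the text branch, so each branch is prepended with prompts
    informed by the other modality (early-stage feature fusion).
    """

    def __init__(self, prompt_len=8, img_dim=768, txt_dim=512):
        super().__init__()
        # Modality-specific learnable prompt tokens (assumed sizes).
        self.img_prompts = nn.Parameter(torch.randn(prompt_len, img_dim) * 0.02)
        self.txt_prompts = nn.Parameter(torch.randn(prompt_len, txt_dim) * 0.02)
        # Coupling projections: text-to-image and image-to-text.
        self.t2i = nn.Linear(txt_dim, img_dim)
        self.i2t = nn.Linear(img_dim, txt_dim)

    def forward(self, img_tokens, txt_tokens):
        b = img_tokens.size(0)
        # Each branch receives the other branch's prompts, mapped
        # through a coupling projection and prepended to its tokens.
        img_p = self.t2i(self.txt_prompts).unsqueeze(0).expand(b, -1, -1)
        txt_p = self.i2t(self.img_prompts).unsqueeze(0).expand(b, -1, -1)
        img_tokens = torch.cat([img_p, img_tokens], dim=1)
        txt_tokens = torch.cat([txt_p, txt_tokens], dim=1)
        return img_tokens, txt_tokens


class MHAAdapter(nn.Module):
    """Hypothetical bottleneck adapter for Dual Adapters Transferring (DAT),
    applied to the output of a (frozen) multi-head attention layer."""

    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()
        # Zero-init the up-projection so the adapter starts as identity.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, mha_out):
        # Residual bottleneck: frozen MHA output plus a small learned update.
        return mha_out + self.up(self.act(self.down(mha_out)))


if __name__ == "__main__":
    bpt = BidirectionalPrompts()
    img = torch.randn(2, 197, 768)  # e.g., ViT patch tokens
    txt = torch.randn(2, 77, 512)   # e.g., CLIP text tokens
    img2, txt2 = bpt(img, txt)
    adapted = MHAAdapter()(torch.randn(2, 205, 768))
    print(img2.shape, txt2.shape, adapted.shape)
```

With the CLIP backbone frozen, only the prompts, coupling projections, and adapters would be trained, which is how a parameter-efficient budget like the reported 7.4% becomes plausible.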
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-image Person Re-identification | CUHK-PEDES (test) | Rank-1 Accuracy (R-1) | 69.7 | 150 |
| Text-based Person Search | CUHK-PEDES (test) | Rank-1 | 69.7 | 142 |
| Text-based Person Search | ICFG-PEDES (test) | R@1 | 58.9 | 104 |
| Text-based Person Search | RSTPReid (test) | R@1 | 57.75 | 85 |
| Text-to-image Person Re-identification | ICFG-PEDES (test) | Rank-1 | 58.9 | 81 |
| Text-based Person Re-identification | RSTPReid (test) | Rank-1 Accuracy | 57.75 | 52 |