# Content and Salient Semantics Collaboration for Cloth-Changing Person Re-Identification

## About
Cloth-changing person re-identification aims at recognizing the same person across non-overlapping cameras despite changes in clothing. Advanced methods resort either to identity-related auxiliary modalities (e.g., sketches, silhouettes, and keypoints) or to clothing labels to mitigate the impact of clothes. However, relying on impractical and inflexible auxiliary modalities or annotations limits their real-world applicability. In this paper, we promote cloth-changing person re-identification by leveraging the abundant semantics present within pedestrian images, without requiring any auxiliary data. Specifically, we first propose a unified Semantics Mining and Refinement (SMR) module to extract robust identity-related content and salient semantics, effectively mitigating interference from clothing appearance. We further propose the Content and Salient Semantics Collaboration (CSSC) framework to coordinate and leverage these semantics, facilitating cross-parallel semantic interaction and refinement. Our proposed method achieves state-of-the-art performance on three cloth-changing benchmarks, demonstrating its superiority over advanced competitors. The code is available at https://github.com/QizaoWang/CSSC-CCReID.
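To make the high-level idea concrete, the sketch below illustrates (with NumPy, not the authors' code) the general pattern of refining two parallel semantic branches and letting them interact before fusion. All function names, shapes, and operations here are hypothetical stand-ins for the SMR and CSSC components described above, not the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def refine(features):
    """Hypothetical stand-in for Semantics Mining and Refinement (SMR):
    L2-normalize features so magnitude differences (e.g., from clothing
    appearance) are suppressed."""
    return features / np.linalg.norm(features, axis=-1, keepdims=True)

def collaborate(content, salient):
    """Hypothetical cross-parallel interaction: each branch is reweighted
    by its similarity to the other, then the two are fused by
    concatenation into one identity representation."""
    sim = float(content @ salient)  # scalar agreement between the branches
    return np.concatenate([content * sim, salient * sim])

# Two parallel semantic branches extracted from the same pedestrian image
content_feat = refine(rng.normal(size=128))   # content semantics
salient_feat = refine(rng.normal(size=128))   # salient semantics

identity_feat = collaborate(content_feat, salient_feat)
print(identity_feat.shape)  # fused identity representation: (256,)
```

In a real system the two branches would come from learned network heads and the interaction would be a trainable module; the point of the sketch is only the two-branch refine-then-collaborate structure.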
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Person Re-Identification | PRCC (clothes-changing) | Top-1 Acc: 65.5 | 76 |
| Person Re-Identification | LTCC (cloth-changing) | Rank-1: 43.6 | 60 |
| Person Re-Identification | PRCC (standard split) | Rank-1 Acc: 100 | 30 |
| Person Re-Identification | LTCC | Rank-1 Acc: 78.1 | 24 |
| Person Re-Identification | Celeb-reID | Rank-1: 64.5 | 22 |