Beyond Pedestrians: Caption-Guided CLIP Framework for High-Difficulty Video-based Person Re-Identification

About

In recent years, video-based person Re-Identification (ReID) has gained attention for its ability to leverage spatiotemporal cues to match individuals across non-overlapping cameras. However, current methods struggle with high-difficulty scenarios, such as sports and dance performances, where multiple individuals wear similar clothing while performing dynamic movements. To overcome these challenges, we propose CG-CLIP, a novel caption-guided CLIP framework that leverages explicit textual descriptions and learnable tokens. Our method introduces two key components: Caption-guided Memory Refinement (CMR) and Token-based Feature Extraction (TFE). CMR utilizes captions generated by Multi-modal Large Language Models (MLLMs) to refine identity-specific features, capturing fine-grained details. TFE employs a cross-attention mechanism with fixed-length learnable tokens to efficiently aggregate spatiotemporal features, reducing computational overhead. We evaluate our approach on two standard datasets (MARS and iLIDS-VID) and two newly constructed high-difficulty datasets (SportsVReID and DanceVReID). Experimental results demonstrate that our method outperforms current state-of-the-art approaches, achieving significant improvements across all benchmarks.
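The TFE component described above aggregates a variable number of spatiotemporal features into a fixed-length set of learnable tokens via cross-attention, so compute stays bounded regardless of clip length. The paper does not publish its implementation here; below is a minimal single-head NumPy sketch of that general idea, with all names (`token_cross_attention`, the token count of 8, dimension 64) chosen for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def token_cross_attention(frame_feats, query_tokens):
    """Aggregate n frame/patch features (n, d) into k fixed token
    embeddings (k, d) with single-head cross-attention: the learnable
    tokens act as queries, the frame features as keys and values."""
    d = query_tokens.shape[-1]
    scores = query_tokens @ frame_feats.T / np.sqrt(d)  # (k, n)
    weights = softmax(scores, axis=-1)                  # each row sums to 1
    return weights @ frame_feats                        # (k, d)

rng = np.random.default_rng(0)
frame_feats = rng.standard_normal((120, 64))  # e.g. features from a whole clip
query_tokens = rng.standard_normal((8, 64))   # 8 fixed learnable tokens
out = token_cross_attention(frame_feats, query_tokens)
print(out.shape)  # output size is independent of the clip length
```

Because the output is always `(k, d)`, the cost of any later self-attention over the aggregated tokens is fixed, which is the overhead reduction the abstract refers to.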

Shogo Hamano, Shunya Wakasugi, Tatsuhito Sato, Sayaka Nakamura • 2026

Related benchmarks

Task                                    Dataset                Metric           Result  Rank
Video Person Re-Identification          MARS v1 (test)         mAP              89.8    41
Video-based Person Re-identification    iLIDS-VID v1 (test)    Rank-1 Accuracy  96.7    18
Video-based Person Re-identification    DanceVReID v1 (test)   mAP              53.8    14
Video-based Person Re-identification    SportsVReID v1 (test)  mAP              77.7    13
