
ALADIN: Attribute-Language Distillation Network for Person Re-Identification

About

Recent vision-language models such as CLIP provide strong cross-modal alignment, but current CLIP-guided ReID pipelines rely on global features and fixed prompts. This limits their ability to capture fine-grained attribute cues and adapt to diverse appearances. We propose ALADIN, an attribute-language distillation network that distills knowledge from a frozen CLIP teacher to a lightweight ReID student. ALADIN introduces fine-grained attribute-local alignment to establish adaptive text-visual correspondence and robust representation learning. A Scene-Aware Prompt Generator produces image-specific soft prompts to facilitate adaptive alignment. Attribute-local distillation enforces consistency between textual attributes and local visual features, significantly enhancing robustness under occlusions. Furthermore, we employ cross-modal contrastive and relation distillation to preserve the inherent structural relationships among attributes. To provide precise supervision, we leverage Multimodal LLMs to generate structured attribute descriptions, which are then converted into localized attention maps via CLIP. At inference, only the student is used. Experiments on Market-1501, DukeMTMC-reID, and MSMT17 show improvements over CNN-, Transformer-, and CLIP-based methods, with better generalization and interpretability.
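The cross-modal relation distillation mentioned above can be illustrated with a minimal sketch: the student is trained so that the pairwise similarity structure among its features matches that of the frozen CLIP teacher. The function names, pure-Python feature vectors, and mean-squared-error objective below are illustrative assumptions, not the paper's actual implementation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def relation_matrix(feats):
    # Pairwise similarity matrix capturing the structural
    # relationships among a batch of features.
    return [[cosine(u, v) for v in feats] for u in feats]

def relation_distill_loss(teacher_feats, student_feats):
    # Relation distillation: penalize divergence between the
    # teacher's and student's similarity structures (MSE here,
    # chosen for simplicity; the paper's objective may differ).
    T = relation_matrix(teacher_feats)
    S = relation_matrix(student_feats)
    n = len(T)
    return sum((T[i][j] - S[i][j]) ** 2
               for i in range(n) for j in range(n)) / (n * n)
```

When the student reproduces the teacher's relational structure exactly, the loss is zero; any distortion of the pairwise similarities increases it, which is what lets the lightweight student inherit structure from the frozen teacher without copying its features directly.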

Wang Zhou, Boran Duan, Haojun Ai, Ruiqi Lan, Ziyue Zhou • 2026

Related benchmarks

Task                      Dataset        Metric           Result   Rank
Person Re-Identification  MSMT17         mAP              0.688    514
Person Re-Identification  DukeMTMC-reID  Rank-1 accuracy  91.7     162
Person Re-Identification  Market-1501    mAP              0.911    119
