
PAND: Prompt-Aware Neighborhood Distillation for Lightweight Fine-Grained Visual Classification

About

Distilling knowledge from large Vision-Language Models (VLMs) into lightweight networks is crucial yet challenging in Fine-Grained Visual Classification (FGVC), because existing approaches rely on fixed prompts and global feature alignment. To address this, we propose PAND (Prompt-Aware Neighborhood Distillation), a two-stage framework that decouples semantic calibration from structural transfer. First, we incorporate Prompt-Aware Semantic Calibration to generate adaptive semantic anchors. Second, we introduce a neighborhood-aware structural distillation strategy that constrains the student's local decision structure. PAND consistently outperforms state-of-the-art methods on four FGVC benchmarks. Notably, our ResNet-18 student achieves 76.09% accuracy on CUB-200, surpassing the strong baseline VL2Lite by 3.4%. Code is available at https://github.com/LLLVTA/PAND.
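The second stage — constraining the student's local decision structure to match the teacher's neighborhood relations — can be illustrated with a minimal NumPy sketch. The abstract does not give PAND's actual loss, so the formulation below (KL divergence over each sample's teacher-defined k-nearest-neighbor similarity distribution) and all names in it are assumptions for illustration, not the authors' method:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def neighborhood_distill_loss(teacher_feats, student_feats, k=3, tau=0.1):
    """Hypothetical neighborhood-aware structural distillation loss.

    For each sample, take its k nearest neighbors under the teacher's
    cosine similarity, then match the student's similarity distribution
    over that neighborhood to the teacher's via KL divergence.
    """
    def l2_normalize(f):
        return f / np.linalg.norm(f, axis=1, keepdims=True)

    t = l2_normalize(teacher_feats)
    s = l2_normalize(student_feats)
    sim_t = t @ t.T                      # teacher pairwise similarities
    sim_s = s @ s.T                      # student pairwise similarities
    np.fill_diagonal(sim_t, -np.inf)     # exclude self from neighborhoods

    n = sim_t.shape[0]
    loss = 0.0
    for i in range(n):
        nbrs = np.argsort(-sim_t[i])[:k]           # teacher-defined neighborhood
        p = softmax(sim_t[i, nbrs] / tau)           # teacher local distribution
        q = softmax(sim_s[i, nbrs] / tau)           # student local distribution
        loss += np.sum(p * (np.log(p) - np.log(q)))  # KL(p || q)
    return loss / n
```

The loss is zero when the student reproduces the teacher's local similarity structure exactly, and grows as the student's neighborhood rankings drift, which matches the abstract's stated goal of transferring local decision structure rather than enforcing global feature alignment.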

Qiuming Luo, Yuebing Li, Feng Li, Chang Kong • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Fine-grained visual classification | FGVC-Aircraft (test) | Top-1 Acc (%) | 64.75 | 287
Fine-grained visual classification | CUB-200-2011 (test) | Top-1 Acc (%) | 76.52 | 70
Fine-grained visual classification | Stanford Dogs (test) | Top-1 Acc (%) | 74.98 | 52
Fine-grained visual classification | Oxford-IIIT Pet (test) | Top-1 Acc (%) | 88.97 | 10
