PAND: Prompt-Aware Neighborhood Distillation for Lightweight Fine-Grained Visual Classification
About
Distilling knowledge from large Vision-Language Models (VLMs) into lightweight networks is crucial yet challenging in Fine-Grained Visual Classification (FGVC), as existing approaches rely on fixed prompts and global alignment. To address this, we propose PAND (Prompt-Aware Neighborhood Distillation), a two-stage framework that decouples semantic calibration from structural transfer. First, we introduce Prompt-Aware Semantic Calibration to generate adaptive semantic anchors. Second, we introduce a neighborhood-aware structural distillation strategy that constrains the student's local decision structure. PAND consistently outperforms state-of-the-art methods on four FGVC benchmarks. Notably, our ResNet-18 student achieves 76.09% accuracy on CUB-200, surpassing the strong baseline VL2Lite by 3.4%. Code is available at https://github.com/LLLVTA/PAND.
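To make the second stage concrete, below is a minimal NumPy sketch of what a neighborhood-aware structural distillation loss could look like: the teacher's embedding space defines each sample's k-nearest neighborhood, and the student is trained to match the teacher's similarity distribution over those neighbors. All names, the choice of KL divergence, and the hyperparameters here are illustrative assumptions, not PAND's actual implementation.

```python
import numpy as np

def neighborhood_distill_loss(teacher_emb, student_emb, k=4, tau=0.1):
    """Illustrative neighborhood-aware distillation loss (an assumption,
    not the paper's exact formulation).

    teacher_emb, student_emb: (N, D) batches of embeddings.
    k: neighborhood size; tau: softmax temperature.
    """
    def l2_normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    def softmax(x):
        x = x - x.max(axis=1, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=1, keepdims=True)

    t = l2_normalize(teacher_emb)
    s = l2_normalize(student_emb)

    sim_t = t @ t.T           # teacher pairwise cosine similarities
    sim_s = s @ s.T           # student pairwise cosine similarities
    np.fill_diagonal(sim_t, -np.inf)  # exclude self from neighborhoods
    np.fill_diagonal(sim_s, -np.inf)

    # The teacher defines each sample's k-nearest local neighborhood.
    nn_idx = np.argsort(-sim_t, axis=1)[:, :k]
    rows = np.arange(len(t))[:, None]

    # Similarity distributions over the *same* teacher-chosen neighbors.
    p_t = softmax(sim_t[rows, nn_idx] / tau)
    p_s = softmax(sim_s[rows, nn_idx] / tau)

    # KL(teacher || student) averaged over the batch constrains the
    # student's local decision structure to follow the teacher's.
    eps = 1e-8
    return float(np.mean(np.sum(p_t * (np.log(p_t + eps) - np.log(p_s + eps)), axis=1)))
```

The loss is zero when the student reproduces the teacher's local similarity structure exactly, and grows as the student's neighborhoods diverge; in practice it would be added to the standard classification loss with a weighting coefficient.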
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Fine-grained visual classification | FGVC-Aircraft (test) | Top-1 Accuracy | 64.75% | 287 |
| Fine-grained visual classification | CUB-200-2011 (test) | Top-1 Accuracy | 76.52% | 70 |
| Fine-grained visual classification | Stanford Dogs (test) | Top-1 Accuracy | 74.98% | 52 |
| Fine-grained visual classification | Oxford-IIIT Pet (test) | Top-1 Accuracy | 88.97% | 10 |