
IIR-VLM: In-Context Instance-level Recognition for Large Vision-Language Models

About

Instance-level recognition (ILR) concerns distinguishing individual instances from one another, with person re-identification as a prominent example. Despite the impressive visual perception capabilities of modern VLMs, we find their performance on ILR unsatisfactory: they often dramatically underperform domain-specific ILR models. This limitation hinders many practical applications of VLMs, e.g., those where recognizing familiar people and objects is crucial for effective visual understanding. Existing solutions typically learn to recognize instances one at a time using instance-specific datasets, which not only incurs substantial data collection and training costs but also struggles with fine-grained discrimination. In this work, we propose IIR-VLM, a VLM enhanced for In-context Instance-level Recognition. We integrate pre-trained ILR expert models as auxiliary visual encoders to provide specialized features for learning diverse instances, which enables VLMs to learn new instances in-context in a one-shot manner. Further, IIR-VLM leverages this knowledge for instance-aware visual understanding. We validate IIR-VLM's efficacy on existing instance personalization benchmarks. Finally, we demonstrate its superior ILR performance on a challenging new benchmark, which assesses ILR capabilities across varying difficulty levels and diverse categories, with persons, faces, pets, and general objects as the target instances.
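The in-context, one-shot matching idea described above can be sketched in a few lines: a query is compared against a single reference instance using features fused from a general VLM vision encoder and an ILR expert encoder. This is a minimal illustrative sketch with random vectors standing in for embeddings; the fusion scheme (weighted sum of L2-normalized features), the `alpha` weight, and the cosine threshold are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def l2_normalize(x):
    # Normalize a feature vector to unit length (small eps avoids div by zero).
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

def fuse_features(vlm_feat, expert_feat, alpha=0.5):
    # Hypothetical fusion: weighted sum of the normalized VLM and
    # ILR-expert features, re-normalized so dot products are cosines.
    return l2_normalize(alpha * l2_normalize(vlm_feat)
                        + (1 - alpha) * l2_normalize(expert_feat))

def one_shot_match(ref_vlm, ref_expert, query_vlm, query_expert,
                   threshold=0.7):
    # One-shot decision: does the query depict the same instance as the
    # single in-context reference? Cosine similarity of fused features.
    ref = fuse_features(ref_vlm, ref_expert)
    query = fuse_features(query_vlm, query_expert)
    return float(ref @ query) >= threshold

# Toy demo with random stand-in "embeddings".
rng = np.random.default_rng(0)
ref_v, ref_e = rng.normal(size=64), rng.normal(size=64)
# Positive query: a small perturbation of the reference features.
pos_v = ref_v + 0.05 * rng.normal(size=64)
pos_e = ref_e + 0.05 * rng.normal(size=64)
# Negative query: unrelated features.
neg_v, neg_e = rng.normal(size=64), rng.normal(size=64)

print(one_shot_match(ref_v, ref_e, pos_v, pos_e))  # expect True
print(one_shot_match(ref_v, ref_e, neg_v, neg_e))  # expect False
```

The point of the sketch is the shape of the pipeline, not the fusion rule itself: the expert encoder supplies fine-grained, domain-specific features (e.g., face or person re-ID embeddings) that the general VLM encoder lacks, so a single reference example suffices to discriminate the instance.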

Liang Shi, Wei Li, Kevin M Beussman, Lin Chen, Yun Fu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Instance-level recognition | ILR Benchmark (test) | Object Accuracy | 92.9 | 8 |
| Instance-aware caption generation | MyVLM benchmark | CLIP Image Similarity | 27.06 | 4 |
| Instance identification | Yo'LLaVA benchmark | Positive Score | 89 | 3 |
| Instance identification | MyVLM benchmark | Positive Score | 87.4 | 3 |
| Instance-level recognition | MM-ID matching tasks | Face Accuracy | 92.9 | 2 |
