
OLIVE: Object Level In-Context Visual Embeddings

About

Recent generalist vision-language models (VLMs) have demonstrated impressive reasoning capabilities across diverse multimodal tasks. However, these models still struggle with fine-grained object-level understanding and grounding. In terms of modeling, existing VLMs implicitly align text tokens with image patch tokens, which is ineffective for embedding alignment at the same granularity and inevitably introduces noisy spurious background features. Additionally, these models struggle when generalizing to unseen visual concepts and may not be reliable for domain-specific tasks without further fine-tuning. To address these limitations, we propose a novel method to prompt large language models with in-context visual object vectors, thereby enabling controllable object-level reasoning. This eliminates the necessity of fusing a lengthy array of image patch features and significantly speeds up training. Furthermore, we propose region-level retrieval using our object representations, facilitating rapid adaptation to new objects without additional training. Our experiments reveal that our method achieves competitive referring object classification and captioning performance, while also offering zero-shot generalization and robustness to visually challenging contexts.
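To make the region-level retrieval idea concrete, here is a minimal sketch: a query object vector is classified by cosine similarity against a small memory of labeled object embeddings, so new concepts can be supported by appending to the memory rather than fine-tuning. The toy 4-d vectors, labels, and function name are invented for illustration; in OLIVE the object vectors would come from a vision encoder over the referenced region.

```python
import numpy as np

def retrieve_label(query_vec, memory_vecs, memory_labels):
    """Return the label of the most similar stored object embedding.

    Uses cosine similarity between the query object vector and each
    vector in the retrieval memory. No gradient updates are needed,
    so unseen concepts are added by appending rows to the memory.
    """
    q = query_vec / np.linalg.norm(query_vec)
    m = memory_vecs / np.linalg.norm(memory_vecs, axis=1, keepdims=True)
    sims = m @ q  # cosine similarity of the query to every memory row
    return memory_labels[int(np.argmax(sims))]

# Toy object embeddings standing in for real encoder features.
memory = np.array([
    [1.0, 0.0, 0.0, 0.1],   # "cat"
    [0.0, 1.0, 0.1, 0.0],   # "dog"
    [0.0, 0.1, 1.0, 0.0],   # "car"
])
labels = ["cat", "dog", "car"]

query = np.array([0.9, 0.1, 0.0, 0.2])  # resembles the "cat" vector
print(retrieve_label(query, memory, labels))  # -> cat
```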

Timothy Ossowski, Junjie Hu • 2024

Related benchmarks

Task                             Dataset         Metric    Result  Rank
Referring expression generation  RefCOCOg (val)  METEOR    17      31
Referring object classification  CXR8 (test)     Accuracy  33.5    9
Referring object classification  COCO (val)      mAP       60.4    8

Other info

Code
