
Multi-Modal Classifiers for Open-Vocabulary Object Detection

About

The goal of this paper is open-vocabulary object detection (OVOD): building a model that can detect objects beyond the set of categories seen at training, thus enabling the user to specify categories of interest at inference without the need for model retraining. We adopt a standard two-stage object detector architecture, and explore three ways of specifying novel categories: via language descriptions, via image exemplars, or via a combination of the two. We make three contributions: first, we prompt a large language model (LLM) to generate informative language descriptions for object classes, and construct powerful text-based classifiers; second, we employ a visual aggregator on image exemplars that can ingest any number of images as input, forming vision-based classifiers; and third, we provide a simple method to fuse information from language descriptions and image exemplars, yielding a multi-modal classifier. When evaluating on the challenging LVIS open-vocabulary benchmark, we demonstrate that: (i) our text-based classifiers outperform all previous OVOD works; (ii) our vision-based classifiers perform as well as the text-based classifiers of prior work; (iii) our multi-modal classifiers perform better than either modality alone; and finally, (iv) our text-based and multi-modal classifiers yield better performance than a fully-supervised detector.
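To make the three classifier variants concrete, below is a minimal sketch of how per-class classifiers could be built from LLM-generated descriptions and image exemplars and then fused. The random stand-in encoders, the mean-pool "aggregator", the embedding dimensions, and the 50/50 fusion weight are all illustrative assumptions, not the paper's exact learned components (in practice the encoders would be frozen CLIP-style towers and the visual aggregator is learned).

```python
# Sketch of text-based, vision-based, and multi-modal classifiers for OVOD.
# All components here are assumptions for illustration, not the paper's code.
import torch
import torch.nn.functional as F

D = 512  # shared embedding dimension (assumption)

# Stand-in encoders; in practice, frozen CLIP-style text/image backbones.
text_encoder = torch.nn.Linear(768, D)
image_encoder = torch.nn.Linear(1024, D)

def text_classifier(description_feats: torch.Tensor) -> torch.Tensor:
    """Average the embeddings of several LLM-generated class descriptions."""
    emb = F.normalize(text_encoder(description_feats), dim=-1)  # (K, D)
    return F.normalize(emb.mean(dim=0), dim=-1)                 # (D,)

def vision_classifier(exemplar_feats: torch.Tensor) -> torch.Tensor:
    """Mean-pool any number of exemplars: a simple stand-in for the paper's
    learned visual aggregator, which also takes a variable number of images."""
    emb = F.normalize(image_encoder(exemplar_feats), dim=-1)    # (N, D)
    return F.normalize(emb.mean(dim=0), dim=-1)                 # (D,)

def multimodal_classifier(w_text: torch.Tensor, w_vis: torch.Tensor,
                          alpha: float = 0.5) -> torch.Tensor:
    """Fuse the two unit-norm classifiers; alpha is an assumed mixing weight."""
    return F.normalize(alpha * w_text + (1 - alpha) * w_vis, dim=-1)

# Score detector region proposals against a class by cosine similarity.
proposals = F.normalize(torch.randn(100, D), dim=-1)  # (R, D) RoI features
w_t = text_classifier(torch.randn(10, 768))           # 10 class descriptions
w_v = vision_classifier(torch.randn(5, 1024))         # 5 image exemplars
w_mm = multimodal_classifier(w_t, w_v)
scores = proposals @ w_mm                             # (R,) per-proposal scores
print(scores.topk(3).values)
```

In this formulation the novel-category classifier is just an embedding vector, which is what lets users specify new categories at inference, by supplying descriptions, exemplars, or both, without retraining the detector.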

Prannay Kaul, Weidi Xie, Andrew Zisserman • 2023

Related benchmarks

Task                  | Dataset         | Metric     | Result | Rank
Instance Segmentation | LVIS v1.0 (val) | -          | -      | 189
Instance Segmentation | LVIS            | mAP (Mask) | 30.6   | 68
Object Detection      | LVIS            | APr        | 19.8   | 59
Instance Segmentation | LVIS (val)      | APr        | 19.3   | 46
