
Aligning and Prompting Everything All at Once for Universal Visual Perception

About

Vision foundation models have been explored recently to build general-purpose vision systems. However, predominant paradigms, driven by casting instance-level tasks as an object-word alignment, bring heavy cross-modality interaction, which is not effective in prompting object detection and visual grounding. Another line of work that focuses on pixel-level tasks often encounters a large annotation gap between things and stuff, and suffers from mutual interference between foreground-object and background-class segmentation. In stark contrast to the prevailing methods, we present APE, a universal visual perception model for aligning and prompting everything all at once in an image to perform diverse tasks, i.e., detection, segmentation, and grounding, as an instance-level sentence-object matching paradigm. Specifically, APE advances the convergence of detection and grounding by reformulating language-guided grounding as open-vocabulary detection, which efficiently scales up model prompting to thousands of category vocabularies and region descriptions while maintaining the effectiveness of cross-modality fusion. To bridge the granularity gap between different pixel-level tasks, APE equalizes semantic and panoptic segmentation to proxy instance learning by considering any isolated region as an individual instance. APE aligns vision and language representations on broad data with natural and challenging characteristics all at once, without task-specific fine-tuning. Extensive experiments on over 160 datasets demonstrate that, with only one suite of weights, APE outperforms (or is on par with) state-of-the-art models, proving that an effective yet universal perception model for aligning and prompting anything is indeed feasible. Code and trained models are released at https://github.com/shenyunhang/APE.
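The sentence-object matching paradigm described above can be illustrated with a minimal sketch: instead of aligning each region to individual words, every detected region is scored against whole sentence embeddings (category names or region descriptions) by similarity. The function name, feature dimensions, and temperature value below are illustrative assumptions, not APE's actual implementation.

```python
import numpy as np

def match_regions_to_prompts(region_feats, prompt_feats, temperature=0.07):
    """Score each region embedding against each sentence embedding.

    region_feats: (num_regions, dim) visual features of detected regions.
    prompt_feats: (num_prompts, dim) text features, one per category name
                  or region description (a whole sentence, not per-word).
    Returns the best-matching prompt index per region and the full logits.
    """
    # L2-normalize both sides so the dot product is cosine similarity.
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    p = prompt_feats / np.linalg.norm(prompt_feats, axis=1, keepdims=True)
    logits = (r @ p.T) / temperature  # (num_regions, num_prompts)
    return logits.argmax(axis=1), logits
```

Because each prompt is a single embedding, scaling to thousands of vocabularies or descriptions only grows the prompt matrix, avoiding the per-word cross-attention that makes heavy fusion expensive.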
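The idea of treating any isolated region as an individual instance can also be sketched: a "stuff" class from a semantic mask is split into connected components, each of which becomes a pseudo-instance that can be trained with the same instance-level losses as "thing" categories. This is a simplified illustration (the helper name and 4-connectivity choice are assumptions), not APE's exact procedure.

```python
import numpy as np
from scipy import ndimage

def stuff_to_instances(semantic_mask, class_id):
    """Split one stuff class into pseudo-instances, one per isolated region.

    semantic_mask: (H, W) integer array of per-pixel class ids.
    Returns a list of boolean masks, one per connected component.
    """
    binary = semantic_mask == class_id
    # Default structuring element gives 4-connected components.
    labeled, num = ndimage.label(binary)
    return [labeled == i for i in range(1, num + 1)]
```

Equalizing semantic and panoptic annotations this way lets a single instance-learning objective cover both foreground objects and background classes, sidestepping the interference between the two.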

Yunhang Shen, Chaoyou Fu, Peixian Chen, Mengdan Zhang, Ke Li, Xing Sun, Yunsheng Wu, Shaohui Lin, Rongrong Ji • 2023

Related benchmarks

Task                  | Dataset                       | Metric   | Result | Rank
Object Detection      | COCO (val)                    | –        | –      | 613
Instance Segmentation | COCO (val)                    | –        | –      | 472
Semantic Segmentation | Cityscapes (val)              | mIoU     | 44.2   | 332
Panoptic Segmentation | Cityscapes (val)              | PQ       | 33.3   | 276
Object Detection      | LVIS (val)                    | mAP      | 59.6   | 141
Object Detection      | LVIS (minival)                | AP       | 64.7   | 127
Semantic Segmentation | PASCAL-Context 59 class (val) | mIoU     | 58.6   | 125
Visual Grounding      | RefCOCO (testB)               | Accuracy | 80.9   | 125
Visual Grounding      | RefCOCO (testA)               | –        | –      | 117
Object Detection      | ODinW-13                      | AP       | 59.8   | 98
Showing 10 of 42 rows

Other info

Code: https://github.com/shenyunhang/APE
