Universal Instance Perception as Object Discovery and Retrieval
About
All instance perception tasks aim at finding certain objects specified by some queries, such as category names, language expressions, and target annotations, yet the field has been split into multiple independent subtasks. In this work, we present a universal instance perception model of the next generation, termed UNINEXT. UNINEXT reformulates diverse instance perception tasks into a unified object discovery and retrieval paradigm and can flexibly perceive different types of objects by simply changing the input prompts. This unified formulation brings the following benefits: (1) Enormous data from different tasks and label vocabularies can be exploited to jointly train general instance-level representations, which is especially beneficial for tasks lacking training data. (2) The unified model is parameter-efficient and saves redundant computation when handling multiple tasks simultaneously. UNINEXT shows superior performance on 20 challenging benchmarks from 10 instance-level tasks, including classical image-level tasks (object detection and instance segmentation), vision-and-language tasks (referring expression comprehension and segmentation), and six video-level object tracking tasks. Code is available at https://github.com/MasterBin-IIAU/UNINEXT.
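The prompt-driven unification described above can be sketched as a single entry point that accepts different query types. The class and function names below (`CategoryPrompt`, `LanguagePrompt`, `AnnotationPrompt`, `perceive`) are hypothetical illustrations of the interface idea, not UNINEXT's actual API:

```python
from dataclasses import dataclass
from typing import List, Union

# Hypothetical prompt types mirroring the three query families in the paper.
@dataclass
class CategoryPrompt:      # category names -> detection / instance segmentation
    names: List[str]

@dataclass
class LanguagePrompt:      # language expression -> referring expression tasks
    expression: str

@dataclass
class AnnotationPrompt:    # target annotation -> object tracking / VOS
    box: List[float]       # [x, y, w, h] of the target in a reference frame

Prompt = Union[CategoryPrompt, LanguagePrompt, AnnotationPrompt]

def perceive(image, prompt: Prompt) -> List[dict]:
    """Unified discovery-and-retrieval stub: one entry point, many tasks.

    A real model would (1) encode the prompt, (2) discover candidate
    instances in the image/frame, and (3) retrieve the instances that
    match the prompt. Here we only dispatch on the prompt type to show
    how a single interface covers all the subtasks.
    """
    if isinstance(prompt, CategoryPrompt):
        task = "detection/segmentation"
    elif isinstance(prompt, LanguagePrompt):
        task = "referring expression"
    else:
        task = "tracking"
    # Placeholder result; a real model returns boxes/masks with scores.
    return [{"task": task, "prompt": prompt}]

# The same call signature serves three task families:
r1 = perceive(None, CategoryPrompt(names=["person", "dog"]))
r2 = perceive(None, LanguagePrompt(expression="the dog on the left"))
r3 = perceive(None, AnnotationPrompt(box=[10.0, 20.0, 50.0, 80.0]))
```

Only the prompt changes between tasks; the discovery-and-retrieval backbone is shared, which is what allows joint training across datasets and label vocabularies.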
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Detection | COCO 2017 (val) | AP | 60.6 | 2454 |
| Instance Segmentation | COCO 2017 (val) | -- | -- | 1144 |
| Video Object Segmentation | DAVIS 2017 (val) | J mean | 77.7 | 1130 |
| Object Detection | COCO (val) | -- | -- | 613 |
| Video Instance Segmentation | YouTube-VIS 2019 (val) | AP | 66.9 | 567 |
| Video Object Segmentation | YouTube-VOS 2018 (val) | J Score (Seen) | 79.9 | 493 |
| Visual Object Tracking | TrackingNet (test) | Normalized Precision (Pnorm) | 88.2 | 460 |
| Visual Object Tracking | LaSOT (test) | AUC | 72.4 | 444 |
| Referring Expression Comprehension | RefCOCO+ (val) | Accuracy | 85.24 | 345 |
| Referring Expression Comprehension | RefCOCO (val) | Accuracy | 92.64 | 335 |