# Mitigating Objectness Bias and Region-to-Text Misalignment for Open-Vocabulary Panoptic Segmentation
## About
Open-vocabulary panoptic segmentation remains hindered by two coupled issues: (i) mask selection bias, where objectness heads trained on closed vocabularies suppress masks of categories not observed in training, and (ii) limited regional understanding in vision-language models such as CLIP, which were optimized for global image classification rather than localized segmentation. We introduce OVRCOAT, a simple, modular framework that tackles both. First, a CLIP-conditioned objectness adjustment (COAT) updates background/foreground probabilities, preserving high-quality masks for out-of-vocabulary objects. Second, an open-vocabulary mask-to-text refinement (OVR) strengthens CLIP's region-level alignment to improve classification of both seen and unseen classes with markedly lower memory cost than prior fine-tuning schemes. The two components combine to jointly improve objectness estimation and mask recognition, yielding consistent panoptic gains. Despite its simplicity, OVRCOAT sets a new state of the art on ADE20K (+5.5% PQ) and delivers clear gains on Mapillary Vistas and Cityscapes (+7.1% and +3% PQ, respectively). The code is available at: https://github.com/nickormushev/OVRCOAT
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Open Vocabulary Semantic Segmentation | PC-459 | mIoU | 19.1 | 47 |
| Panoptic Segmentation | Cityscapes | PQ | 45.3 | 32 |
| Open Vocabulary Semantic Segmentation | PAS-20 (Pascal VOC, test) | mIoU | 95.5 | 28 |
| Panoptic Segmentation | COCO | PQ | 54.6 | 28 |
| Open Vocabulary Semantic Segmentation | A-150 (test) | mIoU | 34.3 | 9 |
| Open Vocabulary Semantic Segmentation | PC-59 (test) | mIoU | 58.5 | 9 |
| Open Vocabulary Semantic Segmentation | A-847 (test) | mIoU | 14.3 | 9 |
| Panoptic Segmentation | Mapillary Vistas | PQ | 19.6 | 4 |