
Mitigating Objectness Bias and Region-to-Text Misalignment for Open-Vocabulary Panoptic Segmentation

About

Open-vocabulary panoptic segmentation remains hindered by two coupled issues: (i) mask selection bias, where objectness heads trained on closed vocabularies suppress masks of categories not observed in training, and (ii) limited regional understanding in vision-language models such as CLIP, which were optimized for global image classification rather than localized segmentation. We introduce OVRCOAT, a simple, modular framework that tackles both. First, a CLIP-conditioned objectness adjustment (COAT) updates background/foreground probabilities, preserving high-quality masks for out-of-vocabulary objects. Second, an open-vocabulary mask-to-text refinement (OVR) strengthens CLIP's region-level alignment to improve classification of both seen and unseen classes with markedly lower memory cost than prior fine-tuning schemes. The two components combine to jointly improve objectness estimation and mask recognition, yielding consistent panoptic gains. Despite its simplicity, OVRCOAT sets a new state of the art on ADE20K (+5.5% PQ) and delivers clear gains on Mapillary Vistas and Cityscapes (+7.1% and +3% PQ, respectively). The code is available at: https://github.com/nickormushev/OVRCOAT
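To make the objectness-adjustment idea concrete, the sketch below shows one plausible way to blend a mask head's foreground probability with CLIP region-to-text confidence. This is an illustrative sketch only, not the paper's actual COAT formulation: the function name `clip_adjusted_objectness`, the convex-combination scheme, and the hyper-parameters `alpha` and `tau` are all assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def clip_adjusted_objectness(objectness_logits, region_embeds, text_embeds,
                             alpha=0.5, tau=0.07):
    """Illustrative sketch (not the paper's exact method): blend the mask
    head's foreground probability with the peak CLIP region-to-text
    similarity, so that masks of categories unseen during training are not
    suppressed when they match some vocabulary prompt.

    objectness_logits: (N,)   raw logits from a closed-vocabulary mask head
    region_embeds:     (N, D) pooled CLIP embeddings of the N mask regions
    text_embeds:       (C, D) CLIP embeddings of the C vocabulary prompts
    alpha, tau:        illustrative blending weight and temperature
    """
    # foreground probability from the objectness head
    p_fg = 1.0 / (1.0 + np.exp(-objectness_logits))              # (N,)
    # cosine similarities between each region and every class prompt
    r = region_embeds / np.linalg.norm(region_embeds, axis=-1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=-1, keepdims=True)
    sims = softmax((r @ t.T) / tau, axis=-1)                     # (N, C)
    clip_conf = sims.max(axis=-1)                                # best-class confidence
    # convex combination: high CLIP confidence can rescue a suppressed mask
    return (1.0 - alpha) * p_fg + alpha * clip_conf
```

Because both terms lie in [0, 1], the adjusted score stays a valid probability; a mask the objectness head nearly discards can still survive if CLIP matches it strongly to some prompt.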

Nikolay Kormushev, Josip Šarić, Matej Kristan • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Open Vocabulary Semantic Segmentation | PC-459 | mIoU 19.1 | 47 |
| Panoptic Segmentation | Cityscapes | PQ 45.3 | 32 |
| Open Vocabulary Semantic Segmentation | PAS-20 Pascal VOC (test) | mIoU 95.5 | 28 |
| Panoptic Segmentation | COCO | PQ 54.6 | 28 |
| Open Vocabulary Semantic Segmentation | A-150 (test) | mIoU 34.3 | 9 |
| Open Vocabulary Semantic Segmentation | PC-59 (test) | mIoU 58.5 | 9 |
| Open Vocabulary Semantic Segmentation | A-847 (test) | mIoU 14.3 | 9 |
| Panoptic Segmentation | Mapillary Vistas | PQ 19.6 | 4 |
